Mar 7 00:46:20.091786 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 7 00:46:20.091805 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Mar 6 22:32:57 -00 2026
Mar 7 00:46:20.091811 kernel: KASLR enabled
Mar 7 00:46:20.091815 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 7 00:46:20.091819 kernel: printk: legacy bootconsole [pl11] enabled
Mar 7 00:46:20.091824 kernel: efi: EFI v2.7 by EDK II
Mar 7 00:46:20.091830 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f9018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 7 00:46:20.091834 kernel: random: crng init done
Mar 7 00:46:20.091838 kernel: secureboot: Secure boot disabled
Mar 7 00:46:20.091842 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:46:20.091845 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 7 00:46:20.091850 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091854 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091858 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 00:46:20.091864 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091868 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091872 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091876 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091881 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091886 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091890 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 7 00:46:20.091894 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 00:46:20.091898 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 7 00:46:20.091903 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 7 00:46:20.091907 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 7 00:46:20.091911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 7 00:46:20.091915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 7 00:46:20.091919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 7 00:46:20.091924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 7 00:46:20.091928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 7 00:46:20.091933 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 7 00:46:20.091937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 7 00:46:20.091941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 7 00:46:20.091945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 7 00:46:20.091950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 7 00:46:20.091954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 7 00:46:20.091958 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 7 00:46:20.091962 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 7 00:46:20.091966 kernel: Zone ranges:
Mar 7 00:46:20.091971 kernel:   DMA    [mem 0x0000000000000000-0x00000000ffffffff]
Mar 7 00:46:20.091978 kernel:   DMA32  empty
Mar 7 00:46:20.091982 kernel:   Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 00:46:20.091987 kernel:   Device empty
Mar 7 00:46:20.091991 kernel: Movable zone start for each node
Mar 7 00:46:20.091995 kernel: Early memory node ranges
Mar 7 00:46:20.092000 kernel:   node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 7 00:46:20.092005 kernel:   node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 7 00:46:20.092010 kernel:   node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 7 00:46:20.092014 kernel:   node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 7 00:46:20.092019 kernel:   node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 7 00:46:20.092023 kernel:   node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 7 00:46:20.092027 kernel:   node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 00:46:20.092032 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 7 00:46:20.092036 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 7 00:46:20.092041 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 7 00:46:20.092045 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:46:20.092049 kernel: psci: PSCIv1.3 detected in firmware.
Mar 7 00:46:20.092054 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:46:20.092059 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 7 00:46:20.092063 kernel: psci: SMC Calling Convention v1.4
Mar 7 00:46:20.092068 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 7 00:46:20.092072 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 7 00:46:20.092077 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 7 00:46:20.092081 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 7 00:46:20.092086 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:46:20.092091 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:46:20.092095 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 7 00:46:20.092100 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:46:20.092104 kernel: CPU features: detected: Spectre-v4
Mar 7 00:46:20.092109 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:46:20.092114 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 7 00:46:20.092118 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 7 00:46:20.092123 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 7 00:46:20.092127 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 7 00:46:20.092132 kernel: alternatives: applying boot alternatives
Mar 7 00:46:20.092137 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9c226afb416af9ef4d18a1b0d3e269f0ccb0a864e96b716716d400068481d58c
Mar 7 00:46:20.092142 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:46:20.092146 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:46:20.092151 kernel: Fallback order for Node 0: 0
Mar 7 00:46:20.092155 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 7 00:46:20.092160 kernel: Policy zone: Normal
Mar 7 00:46:20.092165 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:46:20.092169 kernel: software IO TLB: area num 2.
Mar 7 00:46:20.092174 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Mar 7 00:46:20.092179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:46:20.092183 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:46:20.092188 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:46:20.092193 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:46:20.092197 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:46:20.092202 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:46:20.092206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:46:20.092211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:46:20.092216 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:46:20.092221 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:46:20.092226 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:46:20.092230 kernel: GICv3: 960 SPIs implemented
Mar 7 00:46:20.092234 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:46:20.092239 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:46:20.092243 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 7 00:46:20.092248 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 7 00:46:20.092252 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 7 00:46:20.092257 kernel: ITS: No ITS available, not enabling LPIs
Mar 7 00:46:20.092261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:46:20.092267 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 7 00:46:20.092271 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 00:46:20.092276 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 7 00:46:20.092281 kernel: Console: colour dummy device 80x25
Mar 7 00:46:20.092285 kernel: printk: legacy console [tty1] enabled
Mar 7 00:46:20.092290 kernel: ACPI: Core revision 20240827
Mar 7 00:46:20.092295 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 7 00:46:20.092299 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:46:20.092304 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 7 00:46:20.092309 kernel: landlock: Up and running.
Mar 7 00:46:20.092314 kernel: SELinux: Initializing.
Mar 7 00:46:20.092319 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:46:20.092323 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:46:20.092328 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 7 00:46:20.092333 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 7 00:46:20.092341 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 00:46:20.092347 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:46:20.092351 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:46:20.092356 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 7 00:46:20.092361 kernel: Remapping and enabling EFI services.
Mar 7 00:46:20.092366 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:46:20.092371 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:46:20.092377 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 7 00:46:20.092381 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 7 00:46:20.092386 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:46:20.092391 kernel: SMP: Total of 2 processors activated.
Mar 7 00:46:20.092396 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:46:20.092402 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:46:20.092407 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 7 00:46:20.092412 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 7 00:46:20.092417 kernel: CPU features: detected: Common not Private translations
Mar 7 00:46:20.092421 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:46:20.092426 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 7 00:46:20.092431 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 7 00:46:20.092436 kernel: CPU features: detected: LSE atomic instructions
Mar 7 00:46:20.092441 kernel: CPU features: detected: Privileged Access Never
Mar 7 00:46:20.092446 kernel: CPU features: detected: Speculation barrier (SB)
Mar 7 00:46:20.092451 kernel: CPU features: detected: TLB range maintenance instructions
Mar 7 00:46:20.092456 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 7 00:46:20.092461 kernel: CPU features: detected: Scalable Vector Extension
Mar 7 00:46:20.092465 kernel: alternatives: applying system-wide alternatives
Mar 7 00:46:20.092470 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 7 00:46:20.092475 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 7 00:46:20.092480 kernel: SVE: default vector length 16 bytes per vector
Mar 7 00:46:20.092485 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Mar 7 00:46:20.092491 kernel: devtmpfs: initialized
Mar 7 00:46:20.092496 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:46:20.092501 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:46:20.092506 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 7 00:46:20.092511 kernel: 0 pages in range for non-PLT usage
Mar 7 00:46:20.092515 kernel: 508400 pages in range for PLT usage
Mar 7 00:46:20.092520 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:46:20.092525 kernel: SMBIOS 3.1.0 present.
Mar 7 00:46:20.092531 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 7 00:46:20.092535 kernel: DMI: Memory slots populated: 2/2
Mar 7 00:46:20.092540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:46:20.092545 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:46:20.092550 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:46:20.092555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:46:20.092560 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:46:20.092565 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 7 00:46:20.092570 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:46:20.092575 kernel: cpuidle: using governor menu
Mar 7 00:46:20.092580 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:46:20.092585 kernel: ASID allocator initialised with 32768 entries
Mar 7 00:46:20.092590 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:46:20.092595 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:46:20.092600 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:46:20.092604 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:46:20.092609 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:46:20.092614 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:46:20.092620 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:46:20.092624 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:46:20.092629 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:46:20.092634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:46:20.092639 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:46:20.092643 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:46:20.092648 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:46:20.092653 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:46:20.092657 kernel: ACPI: Interpreter enabled
Mar 7 00:46:20.092663 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:46:20.092668 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 7 00:46:20.092673 kernel: printk: legacy console [ttyAMA0] enabled
Mar 7 00:46:20.092677 kernel: printk: legacy bootconsole [pl11] disabled
Mar 7 00:46:20.092682 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 7 00:46:20.092687 kernel: ACPI: CPU0 has been hot-added
Mar 7 00:46:20.092692 kernel: ACPI: CPU1 has been hot-added
Mar 7 00:46:20.092696 kernel: iommu: Default domain type: Translated
Mar 7 00:46:20.092701 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:46:20.092723 kernel: efivars: Registered efivars operations
Mar 7 00:46:20.092728 kernel: vgaarb: loaded
Mar 7 00:46:20.092733 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:46:20.092738 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:46:20.092742 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:46:20.092747 kernel: pnp: PnP ACPI init
Mar 7 00:46:20.092752 kernel: pnp: PnP ACPI: found 0 devices
Mar 7 00:46:20.092757 kernel: NET: Registered PF_INET protocol family
Mar 7 00:46:20.092761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:46:20.092766 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:46:20.092772 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:46:20.092777 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:46:20.092782 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:46:20.092787 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:46:20.092791 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:46:20.092796 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:46:20.092801 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:46:20.092806 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:46:20.092810 kernel: kvm [1]: HYP mode not available
Mar 7 00:46:20.092816 kernel: Initialise system trusted keyrings
Mar 7 00:46:20.092821 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:46:20.092825 kernel: Key type asymmetric registered
Mar 7 00:46:20.092830 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:46:20.092835 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 7 00:46:20.092839 kernel: io scheduler mq-deadline registered
Mar 7 00:46:20.092844 kernel: io scheduler kyber registered
Mar 7 00:46:20.092849 kernel: io scheduler bfq registered
Mar 7 00:46:20.092853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 00:46:20.092859 kernel: thunder_xcv, ver 1.0
Mar 7 00:46:20.092864 kernel: thunder_bgx, ver 1.0
Mar 7 00:46:20.092868 kernel: nicpf, ver 1.0
Mar 7 00:46:20.092873 kernel: nicvf, ver 1.0
Mar 7 00:46:20.092986 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 00:46:20.093037 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:46:19 UTC (1772844379)
Mar 7 00:46:20.093043 kernel: efifb: probing for efifb
Mar 7 00:46:20.093049 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 00:46:20.093054 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 00:46:20.093059 kernel: efifb: scrolling: redraw
Mar 7 00:46:20.093064 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 00:46:20.093068 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 00:46:20.093073 kernel: fb0: EFI VGA frame buffer device
Mar 7 00:46:20.093078 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 7 00:46:20.093082 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 00:46:20.093087 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 7 00:46:20.093093 kernel: watchdog: NMI not fully supported
Mar 7 00:46:20.093098 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 00:46:20.093103 kernel: NET: Registered PF_INET6 protocol family
Mar 7 00:46:20.093107 kernel: Segment Routing with IPv6
Mar 7 00:46:20.093112 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 00:46:20.093117 kernel: NET: Registered PF_PACKET protocol family
Mar 7 00:46:20.093121 kernel: Key type dns_resolver registered
Mar 7 00:46:20.093126 kernel: registered taskstats version 1
Mar 7 00:46:20.093131 kernel: Loading compiled-in X.509 certificates
Mar 7 00:46:20.093136 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 7eb2f80205b35f103c9dbaa59957e2e5fe845c0f'
Mar 7 00:46:20.093141 kernel: Demotion targets for Node 0: null
Mar 7 00:46:20.093146 kernel: Key type .fscrypt registered
Mar 7 00:46:20.093150 kernel: Key type fscrypt-provisioning registered
Mar 7 00:46:20.093155 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 00:46:20.093160 kernel: ima: Allocated hash algorithm: sha1
Mar 7 00:46:20.093165 kernel: ima: No architecture policies found
Mar 7 00:46:20.093169 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 00:46:20.093174 kernel: clk: Disabling unused clocks
Mar 7 00:46:20.093179 kernel: PM: genpd: Disabling unused power domains
Mar 7 00:46:20.093184 kernel: Warning: unable to open an initial console.
Mar 7 00:46:20.093189 kernel: Freeing unused kernel memory: 39552K
Mar 7 00:46:20.093194 kernel: Run /init as init process
Mar 7 00:46:20.093199 kernel:   with arguments:
Mar 7 00:46:20.093203 kernel:     /init
Mar 7 00:46:20.093208 kernel:   with environment:
Mar 7 00:46:20.093213 kernel:     HOME=/
Mar 7 00:46:20.093217 kernel:     TERM=linux
Mar 7 00:46:20.093223 systemd[1]: Successfully made /usr/ read-only.
Mar 7 00:46:20.093231 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 00:46:20.093237 systemd[1]: Detected virtualization microsoft.
Mar 7 00:46:20.093242 systemd[1]: Detected architecture arm64.
Mar 7 00:46:20.093247 systemd[1]: Running in initrd.
Mar 7 00:46:20.093252 systemd[1]: No hostname configured, using default hostname.
Mar 7 00:46:20.093257 systemd[1]: Hostname set to .
Mar 7 00:46:20.093262 systemd[1]: Initializing machine ID from random generator.
Mar 7 00:46:20.093268 systemd[1]: Queued start job for default target initrd.target.
Mar 7 00:46:20.093273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:46:20.093278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:46:20.093284 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 00:46:20.093289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:46:20.093295 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 00:46:20.093300 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 00:46:20.093307 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 00:46:20.093313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 00:46:20.093318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:46:20.093323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:46:20.093328 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:46:20.093334 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:46:20.093339 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:46:20.093344 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:46:20.093350 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:46:20.093355 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:46:20.093360 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:46:20.093366 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 7 00:46:20.093371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:46:20.093376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:46:20.093381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:46:20.093387 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:46:20.093392 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 00:46:20.093398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:46:20.093403 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 00:46:20.093409 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 7 00:46:20.093414 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 00:46:20.093419 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:46:20.093424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:46:20.093439 systemd-journald[225]: Collecting audit messages is disabled.
Mar 7 00:46:20.093454 systemd-journald[225]: Journal started
Mar 7 00:46:20.093468 systemd-journald[225]: Runtime Journal (/run/log/journal/707bcd1e3f6e49f494dbc908fa55fbc8) is 8M, max 78.3M, 70.3M free.
Mar 7 00:46:20.095748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:46:20.101055 systemd-modules-load[227]: Inserted module 'overlay'
Mar 7 00:46:20.119722 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 00:46:20.119762 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:46:20.129901 kernel: Bridge firewalling registered
Mar 7 00:46:20.129997 systemd-modules-load[227]: Inserted module 'br_netfilter'
Mar 7 00:46:20.132212 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 00:46:20.143272 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:46:20.148776 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 00:46:20.157026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:46:20.164500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:46:20.175444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:46:20.207288 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:46:20.216537 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:46:20.229699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:46:20.248987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:46:20.258259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:46:20.260568 systemd-tmpfiles[250]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 7 00:46:20.270474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:46:20.282181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:46:20.295032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 00:46:20.310778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:46:20.322792 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:46:20.341732 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9c226afb416af9ef4d18a1b0d3e269f0ccb0a864e96b716716d400068481d58c
Mar 7 00:46:20.342451 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:46:20.389073 systemd-resolved[263]: Positive Trust Anchors:
Mar 7 00:46:20.389087 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:46:20.389107 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:46:20.390722 systemd-resolved[263]: Defaulting to hostname 'linux'.
Mar 7 00:46:20.393022 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:46:20.398702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:46:20.482722 kernel: SCSI subsystem initialized
Mar 7 00:46:20.488732 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 00:46:20.495734 kernel: iscsi: registered transport (tcp)
Mar 7 00:46:20.508854 kernel: iscsi: registered transport (qla4xxx)
Mar 7 00:46:20.508909 kernel: QLogic iSCSI HBA Driver
Mar 7 00:46:20.522212 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:46:20.542571 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:46:20.555313 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:46:20.597354 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:46:20.603843 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 00:46:20.676725 kernel: raid6: neonx8 gen() 18541 MB/s
Mar 7 00:46:20.693714 kernel: raid6: neonx4 gen() 18549 MB/s
Mar 7 00:46:20.712715 kernel: raid6: neonx2 gen() 17077 MB/s
Mar 7 00:46:20.732713 kernel: raid6: neonx1 gen() 15020 MB/s
Mar 7 00:46:20.751712 kernel: raid6: int64x8 gen() 10536 MB/s
Mar 7 00:46:20.771820 kernel: raid6: int64x4 gen() 10617 MB/s
Mar 7 00:46:20.791737 kernel: raid6: int64x2 gen() 8970 MB/s
Mar 7 00:46:20.813253 kernel: raid6: int64x1 gen() 7015 MB/s
Mar 7 00:46:20.813333 kernel: raid6: using algorithm neonx4 gen() 18549 MB/s
Mar 7 00:46:20.835553 kernel: raid6: .... xor() 15144 MB/s, rmw enabled
Mar 7 00:46:20.835626 kernel: raid6: using neon recovery algorithm
Mar 7 00:46:20.844468 kernel: xor: measuring software checksum speed
Mar 7 00:46:20.844488 kernel: 8regs : 28513 MB/sec
Mar 7 00:46:20.849852 kernel: 32regs : 27594 MB/sec
Mar 7 00:46:20.849859 kernel: arm64_neon : 37593 MB/sec
Mar 7 00:46:20.852926 kernel: xor: using function: arm64_neon (37593 MB/sec)
Mar 7 00:46:20.891731 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 00:46:20.897243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:46:20.906839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:46:20.927612 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Mar 7 00:46:20.930574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:46:20.938834 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 00:46:20.966166 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Mar 7 00:46:20.988062 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:46:20.994773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:46:21.041506 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:46:21.054182 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 00:46:21.114731 kernel: hv_vmbus: Vmbus version:5.3 Mar 7 00:46:21.134428 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 7 00:46:21.134489 kernel: hv_vmbus: registering driver hid_hyperv Mar 7 00:46:21.134497 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 7 00:46:21.125041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:46:21.145903 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 7 00:46:21.125153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:46:21.152833 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:46:21.185127 kernel: hv_vmbus: registering driver hv_netvsc Mar 7 00:46:21.185151 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 7 00:46:21.185159 kernel: hv_vmbus: registering driver hv_storvsc Mar 7 00:46:21.185165 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 7 00:46:21.185171 kernel: scsi host0: storvsc_host_t Mar 7 00:46:21.174774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:46:21.193335 kernel: scsi host1: storvsc_host_t Mar 7 00:46:21.185272 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 7 00:46:21.211598 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 7 00:46:21.218134 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 7 00:46:21.212156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:46:21.212456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 00:46:21.224858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:46:21.248824 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 7 00:46:21.254725 kernel: hv_netvsc 7ced8dd3-c7b9-7ced-8dd3-c7b97ced8dd3 eth0: VF slot 1 added Mar 7 00:46:21.277159 kernel: PTP clock support registered Mar 7 00:46:21.277208 kernel: hv_vmbus: registering driver hv_pci Mar 7 00:46:21.277216 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 00:46:21.277365 kernel: hv_pci e2c3b9a2-4cb2-4942-856a-03914e164a39: PCI VMBus probing: Using version 0x10004 Mar 7 00:46:21.272953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:46:21.321855 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 00:46:21.322045 kernel: hv_utils: Registering HyperV Utility Driver Mar 7 00:46:21.322053 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 00:46:21.322119 kernel: hv_pci e2c3b9a2-4cb2-4942-856a-03914e164a39: PCI host bridge to bus 4cb2:00 Mar 7 00:46:21.322185 kernel: hv_vmbus: registering driver hv_utils Mar 7 00:46:21.322192 kernel: pci_bus 4cb2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 7 00:46:21.322269 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 00:46:21.322329 kernel: pci_bus 4cb2:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 00:46:21.322383 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 00:46:21.322441 kernel: pci 4cb2:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Mar 7 00:46:21.329887 kernel: hv_utils: Heartbeat IC version 3.0 Mar 7 00:46:21.335719 kernel: hv_utils: Shutdown IC version 3.2 Mar 7 00:46:21.335756 kernel: hv_utils: TimeSync IC version 4.0 Mar 7 00:46:21.206663 systemd-resolved[263]: Clock change detected. Flushing caches. 
Mar 7 00:46:21.225562 kernel: pci 4cb2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 7 00:46:21.225600 systemd-journald[225]: Time jumped backwards, rotating. Mar 7 00:46:21.225629 kernel: pci 4cb2:00:02.0: enabling Extended Tags Mar 7 00:46:21.225640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:46:21.230443 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 00:46:21.230578 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 00:46:21.246989 kernel: pci 4cb2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4cb2:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Mar 7 00:46:21.247174 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 00:46:21.247181 kernel: pci_bus 4cb2:00: busn_res: [bus 00-ff] end is updated to 00 Mar 7 00:46:21.251675 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 00:46:21.251806 kernel: pci 4cb2:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Mar 7 00:46:21.273692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 00:46:21.296714 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#227 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 00:46:21.335843 kernel: mlx5_core 4cb2:00:02.0: enabling device (0000 -> 0002) Mar 7 00:46:21.343977 kernel: mlx5_core 4cb2:00:02.0: PTM is not supported by PCIe Mar 7 00:46:21.344076 kernel: mlx5_core 4cb2:00:02.0: firmware version: 16.30.5026 Mar 7 00:46:21.527683 kernel: hv_netvsc 7ced8dd3-c7b9-7ced-8dd3-c7b97ced8dd3 eth0: VF registering: eth1 Mar 7 00:46:21.527905 kernel: mlx5_core 4cb2:00:02.0 eth1: joined to eth0 Mar 7 00:46:21.535812 kernel: mlx5_core 4cb2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 7 00:46:21.545674 kernel: mlx5_core 4cb2:00:02.0 enP19634s1: renamed from eth1 Mar 7 00:46:21.702651 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 7 00:46:21.761089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 00:46:21.796217 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 00:46:21.866170 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 00:46:21.871607 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 00:46:21.877411 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 00:46:21.900952 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 00:46:21.906544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:46:21.915762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:46:21.925729 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:46:21.935822 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 00:46:21.956683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:46:21.968872 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:46:22.982934 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:46:22.982983 disk-uuid[655]: The operation has completed successfully. Mar 7 00:46:23.052449 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 00:46:23.053803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 00:46:23.086132 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 00:46:23.111164 sh[821]: Success Mar 7 00:46:23.145098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 00:46:23.145165 kernel: device-mapper: uevent: version 1.0.3 Mar 7 00:46:23.150288 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 7 00:46:23.160832 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 7 00:46:23.396953 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 00:46:23.405247 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 00:46:23.423092 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 00:46:23.443698 kernel: BTRFS: device fsid 376b0ad0-b1fc-4099-8019-6f1f3d92d570 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (839) Mar 7 00:46:23.443734 kernel: BTRFS info (device dm-0): first mount of filesystem 376b0ad0-b1fc-4099-8019-6f1f3d92d570 Mar 7 00:46:23.452973 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:46:23.779971 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 7 00:46:23.780053 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 7 00:46:23.811373 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 00:46:23.818674 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 7 00:46:23.823491 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 00:46:23.824282 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 00:46:23.845473 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 00:46:23.877683 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (862) Mar 7 00:46:23.888627 kernel: BTRFS info (device sda6): first mount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4 Mar 7 00:46:23.888691 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:46:23.914534 kernel: BTRFS info (device sda6): turning on async discard Mar 7 00:46:23.914601 kernel: BTRFS info (device sda6): enabling free space tree Mar 7 00:46:23.923704 kernel: BTRFS info (device sda6): last unmount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4 Mar 7 00:46:23.925206 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 00:46:23.931325 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 00:46:23.980410 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 00:46:23.993504 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:46:24.031159 systemd-networkd[1008]: lo: Link UP Mar 7 00:46:24.031167 systemd-networkd[1008]: lo: Gained carrier Mar 7 00:46:24.031899 systemd-networkd[1008]: Enumeration completed Mar 7 00:46:24.032005 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:46:24.037069 systemd[1]: Reached target network.target - Network. Mar 7 00:46:24.039894 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:46:24.039897 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 00:46:24.115674 kernel: mlx5_core 4cb2:00:02.0 enP19634s1: Link up Mar 7 00:46:24.149668 kernel: hv_netvsc 7ced8dd3-c7b9-7ced-8dd3-c7b97ced8dd3 eth0: Data path switched to VF: enP19634s1 Mar 7 00:46:24.150044 systemd-networkd[1008]: enP19634s1: Link UP Mar 7 00:46:24.150101 systemd-networkd[1008]: eth0: Link UP Mar 7 00:46:24.150192 systemd-networkd[1008]: eth0: Gained carrier Mar 7 00:46:24.150205 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:46:24.169915 systemd-networkd[1008]: enP19634s1: Gained carrier Mar 7 00:46:24.179696 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 00:46:25.077009 ignition[949]: Ignition 2.22.0 Mar 7 00:46:25.077023 ignition[949]: Stage: fetch-offline Mar 7 00:46:25.081403 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:46:25.077123 ignition[949]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:25.089402 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 00:46:25.077128 ignition[949]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:25.077198 ignition[949]: parsed url from cmdline: "" Mar 7 00:46:25.077200 ignition[949]: no config URL provided Mar 7 00:46:25.077208 ignition[949]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:46:25.077213 ignition[949]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:46:25.077217 ignition[949]: failed to fetch config: resource requires networking Mar 7 00:46:25.077431 ignition[949]: Ignition finished successfully Mar 7 00:46:25.123478 ignition[1017]: Ignition 2.22.0 Mar 7 00:46:25.123483 ignition[1017]: Stage: fetch Mar 7 00:46:25.123752 ignition[1017]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:25.123760 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:25.123859 ignition[1017]: parsed url from cmdline: "" Mar 7 00:46:25.123861 ignition[1017]: no config URL provided Mar 7 00:46:25.123865 ignition[1017]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:46:25.123872 ignition[1017]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:46:25.123889 ignition[1017]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 00:46:25.221205 ignition[1017]: GET result: OK Mar 7 00:46:25.222036 ignition[1017]: config has been read from IMDS userdata Mar 7 00:46:25.222061 ignition[1017]: parsing config with SHA512: 797fe543e0f014a48ff1d9f3e2b30d3dfabb91c01353ccf47a214750e276a057e78478615a334fca66b978ef276fabd2d357f5db5f4639b0f493fd34e2f3979d Mar 7 00:46:25.225443 unknown[1017]: fetched base config from "system" Mar 7 00:46:25.225768 ignition[1017]: fetch: fetch complete Mar 7 00:46:25.225447 unknown[1017]: fetched base config from "system" Mar 7 00:46:25.225771 ignition[1017]: fetch: fetch passed Mar 7 00:46:25.225451 unknown[1017]: fetched user config from "azure" Mar 7 00:46:25.225821 ignition[1017]: Ignition finished 
successfully Mar 7 00:46:25.228691 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 00:46:25.237187 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 00:46:25.272748 ignition[1024]: Ignition 2.22.0 Mar 7 00:46:25.272757 ignition[1024]: Stage: kargs Mar 7 00:46:25.272918 ignition[1024]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:25.278525 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 00:46:25.272925 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:25.284196 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 00:46:25.273456 ignition[1024]: kargs: kargs passed Mar 7 00:46:25.273503 ignition[1024]: Ignition finished successfully Mar 7 00:46:25.315891 ignition[1030]: Ignition 2.22.0 Mar 7 00:46:25.315904 ignition[1030]: Stage: disks Mar 7 00:46:25.319811 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 00:46:25.316071 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:25.326234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 00:46:25.316079 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:25.334921 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 00:46:25.316619 ignition[1030]: disks: disks passed Mar 7 00:46:25.343911 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:46:25.316676 ignition[1030]: Ignition finished successfully Mar 7 00:46:25.352538 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:46:25.361308 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:46:25.370675 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 7 00:46:25.449934 systemd-fsck[1038]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Mar 7 00:46:25.457812 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 00:46:25.463910 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 00:46:25.701683 kernel: EXT4-fs (sda9): mounted filesystem dc3cd474-cc91-4aa5-8987-77b9669cedbb r/w with ordered data mode. Quota mode: none. Mar 7 00:46:25.702081 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 00:46:25.709028 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 00:46:25.728617 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:46:25.732991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 00:46:25.746781 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 00:46:25.757743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 00:46:25.757780 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:46:25.763970 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 00:46:25.780747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 00:46:25.804701 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1052) Mar 7 00:46:25.814422 kernel: BTRFS info (device sda6): first mount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4 Mar 7 00:46:25.814460 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:46:25.823779 kernel: BTRFS info (device sda6): turning on async discard Mar 7 00:46:25.823826 kernel: BTRFS info (device sda6): enabling free space tree Mar 7 00:46:25.825129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 00:46:25.866798 systemd-networkd[1008]: eth0: Gained IPv6LL Mar 7 00:46:26.206237 coreos-metadata[1054]: Mar 07 00:46:26.206 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 00:46:26.214667 coreos-metadata[1054]: Mar 07 00:46:26.214 INFO Fetch successful Mar 7 00:46:26.214667 coreos-metadata[1054]: Mar 07 00:46:26.214 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 00:46:26.228176 coreos-metadata[1054]: Mar 07 00:46:26.227 INFO Fetch successful Mar 7 00:46:26.243704 coreos-metadata[1054]: Mar 07 00:46:26.243 INFO wrote hostname ci-4459.2.3-n-9877c76adf to /sysroot/etc/hostname Mar 7 00:46:26.250803 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 00:46:26.482948 initrd-setup-root[1082]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 00:46:26.505675 initrd-setup-root[1089]: cut: /sysroot/etc/group: No such file or directory Mar 7 00:46:26.512962 initrd-setup-root[1096]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 00:46:26.534840 initrd-setup-root[1103]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 00:46:28.033448 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 00:46:28.039561 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 00:46:28.063346 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 00:46:28.071945 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 00:46:28.085154 kernel: BTRFS info (device sda6): last unmount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4 Mar 7 00:46:28.111415 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 7 00:46:28.118998 ignition[1170]: INFO : Ignition 2.22.0 Mar 7 00:46:28.118998 ignition[1170]: INFO : Stage: mount Mar 7 00:46:28.118998 ignition[1170]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:28.118998 ignition[1170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:28.118998 ignition[1170]: INFO : mount: mount passed Mar 7 00:46:28.118998 ignition[1170]: INFO : Ignition finished successfully Mar 7 00:46:28.122896 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 00:46:28.131114 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 00:46:28.153759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:46:28.182669 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1183) Mar 7 00:46:28.193193 kernel: BTRFS info (device sda6): first mount of filesystem a2920a34-fe1c-42ba-814e-fd8c35911ce4 Mar 7 00:46:28.193233 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:46:28.202794 kernel: BTRFS info (device sda6): turning on async discard Mar 7 00:46:28.202836 kernel: BTRFS info (device sda6): enabling free space tree Mar 7 00:46:28.205174 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 00:46:28.239304 ignition[1200]: INFO : Ignition 2.22.0 Mar 7 00:46:28.239304 ignition[1200]: INFO : Stage: files Mar 7 00:46:28.245733 ignition[1200]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:28.245733 ignition[1200]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:28.245733 ignition[1200]: DEBUG : files: compiled without relabeling support, skipping Mar 7 00:46:28.259858 ignition[1200]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 00:46:28.259858 ignition[1200]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 00:46:28.305786 ignition[1200]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 00:46:28.311501 ignition[1200]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 00:46:28.311501 ignition[1200]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 00:46:28.307027 unknown[1200]: wrote ssh authorized keys file for user: core Mar 7 00:46:28.358767 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:46:28.367138 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 7 00:46:28.391501 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 00:46:28.541314 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 
00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:46:28.549502 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:46:28.609577 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:46:28.609577 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:46:28.609577 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:46:28.609577 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:46:28.609577 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:46:28.609577 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 7 00:46:28.983472 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 00:46:29.399350 ignition[1200]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:46:29.399350 ignition[1200]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 00:46:29.443920 ignition[1200]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:46:29.453735 ignition[1200]: INFO : files: files passed Mar 7 00:46:29.453735 ignition[1200]: INFO : Ignition finished successfully Mar 7 00:46:29.455852 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 00:46:29.466894 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 00:46:29.489170 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 7 00:46:29.502840 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 00:46:29.502930 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 00:46:29.553298 initrd-setup-root-after-ignition[1230]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:46:29.553298 initrd-setup-root-after-ignition[1230]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:46:29.567344 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:46:29.560792 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:46:29.573144 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 00:46:29.584820 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 00:46:29.628637 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 00:46:29.628770 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 00:46:29.638669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 00:46:29.652671 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 00:46:29.657310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 00:46:29.658075 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 00:46:29.698296 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:46:29.705547 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 00:46:29.732272 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:46:29.738169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 00:46:29.747945 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 00:46:29.756873 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 00:46:29.756988 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:46:29.769855 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 00:46:29.774511 systemd[1]: Stopped target basic.target - Basic System. Mar 7 00:46:29.784150 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 00:46:29.792600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:46:29.801950 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 00:46:29.811362 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 7 00:46:29.821407 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 00:46:29.830602 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:46:29.840658 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 00:46:29.849564 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 00:46:29.858833 systemd[1]: Stopped target swap.target - Swaps. Mar 7 00:46:29.867033 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 00:46:29.867143 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:46:29.878722 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:46:29.883587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:46:29.893271 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 00:46:29.897272 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:46:29.903259 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Mar 7 00:46:29.903364 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 00:46:29.917344 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 00:46:29.917437 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:46:29.922994 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 00:46:29.923063 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 00:46:29.931573 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 00:46:30.009326 ignition[1254]: INFO : Ignition 2.22.0 Mar 7 00:46:30.009326 ignition[1254]: INFO : Stage: umount Mar 7 00:46:30.009326 ignition[1254]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:46:30.009326 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 00:46:30.009326 ignition[1254]: INFO : umount: umount passed Mar 7 00:46:30.009326 ignition[1254]: INFO : Ignition finished successfully Mar 7 00:46:29.931643 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 00:46:29.944627 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 00:46:29.959848 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 00:46:29.960018 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:46:29.972815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 00:46:29.993774 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 00:46:29.993964 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:46:30.007894 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 00:46:30.007991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:46:30.018819 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 7 00:46:30.018901 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 00:46:30.027912 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 00:46:30.029245 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 00:46:30.029433 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 00:46:30.037388 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 00:46:30.037440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 00:46:30.048717 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 00:46:30.048772 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 00:46:30.056233 systemd[1]: Stopped target network.target - Network. Mar 7 00:46:30.065249 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 00:46:30.065330 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:46:30.079420 systemd[1]: Stopped target paths.target - Path Units. Mar 7 00:46:30.088281 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 00:46:30.097684 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:46:30.105360 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 00:46:30.115151 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 00:46:30.123502 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 00:46:30.123549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:46:30.132579 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 00:46:30.132606 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:46:30.140811 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 00:46:30.140865 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 00:46:30.149796 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Mar 7 00:46:30.149823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 00:46:30.163731 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 00:46:30.171420 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 00:46:30.181431 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 00:46:30.181519 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 00:46:30.190284 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 00:46:30.190355 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 00:46:30.199459 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 00:46:30.199556 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 00:46:30.214003 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 7 00:46:30.420551 kernel: hv_netvsc 7ced8dd3-c7b9-7ced-8dd3-c7b97ced8dd3 eth0: Data path switched from VF: enP19634s1 Mar 7 00:46:30.214197 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 00:46:30.214284 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 00:46:30.226479 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 7 00:46:30.228348 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 7 00:46:30.236838 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 00:46:30.236881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:46:30.246781 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 00:46:30.246842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 00:46:30.255838 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 00:46:30.276755 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 7 00:46:30.276830 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 00:46:30.287138 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 00:46:30.287188 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:46:30.294891 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 00:46:30.294925 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 00:46:30.299557 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 00:46:30.299589 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:46:30.312059 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:46:30.320704 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 7 00:46:30.320757 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 7 00:46:30.344320 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 00:46:30.344535 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:46:30.354437 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 00:46:30.354470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 00:46:30.363630 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 00:46:30.363676 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:46:30.372244 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 00:46:30.372287 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:46:30.385637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 00:46:30.385689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Mar 7 00:46:30.404525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:46:30.404573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:46:30.425368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 00:46:30.442877 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 7 00:46:30.442953 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 00:46:30.461904 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 00:46:30.461961 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:46:30.472396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:46:30.472446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:46:30.482908 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 7 00:46:30.665154 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Mar 7 00:46:30.482953 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 7 00:46:30.482979 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 7 00:46:30.483253 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 00:46:30.483338 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 00:46:30.491591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 00:46:30.491669 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 00:46:30.501308 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 00:46:30.510934 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 00:46:30.541110 systemd[1]: Switching root. 
Mar 7 00:46:30.705773 systemd-journald[225]: Journal stopped Mar 7 00:46:35.219530 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 00:46:35.219549 kernel: SELinux: policy capability open_perms=1 Mar 7 00:46:35.219557 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 00:46:35.219562 kernel: SELinux: policy capability always_check_network=0 Mar 7 00:46:35.219567 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 00:46:35.219574 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 00:46:35.219580 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 00:46:35.219585 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 00:46:35.219592 kernel: SELinux: policy capability userspace_initial_context=0 Mar 7 00:46:35.219597 kernel: audit: type=1403 audit(1772844391.673:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 00:46:35.219604 systemd[1]: Successfully loaded SELinux policy in 205.271ms. Mar 7 00:46:35.219612 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.402ms. Mar 7 00:46:35.219619 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 7 00:46:35.219625 systemd[1]: Detected virtualization microsoft. Mar 7 00:46:35.219631 systemd[1]: Detected architecture arm64. Mar 7 00:46:35.219637 systemd[1]: Detected first boot. Mar 7 00:46:35.219644 systemd[1]: Hostname set to . Mar 7 00:46:35.219650 systemd[1]: Initializing machine ID from random generator. Mar 7 00:46:35.219673 zram_generator::config[1296]: No configuration found. Mar 7 00:46:35.219680 kernel: NET: Registered PF_VSOCK protocol family Mar 7 00:46:35.219686 systemd[1]: Populated /etc with preset unit settings. 
Mar 7 00:46:35.219692 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 7 00:46:35.219698 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 00:46:35.219705 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 00:46:35.219711 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 00:46:35.219717 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 00:46:35.219724 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 00:46:35.219730 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 00:46:35.219737 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 00:46:35.219743 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 00:46:35.219750 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 00:46:35.219757 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 00:46:35.219763 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 00:46:35.219769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:46:35.219775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:46:35.219781 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 00:46:35.219787 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 00:46:35.219794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 00:46:35.219801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 7 00:46:35.219807 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 7 00:46:35.219815 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:46:35.219821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:46:35.219828 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 00:46:35.219834 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 00:46:35.219840 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 00:46:35.219847 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 00:46:35.219853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:46:35.219860 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:46:35.219866 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:46:35.219872 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:46:35.219879 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 00:46:35.219885 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 00:46:35.219893 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 7 00:46:35.219899 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:46:35.219905 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 00:46:35.219911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:46:35.219917 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 00:46:35.219923 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 00:46:35.219930 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 00:46:35.219937 systemd[1]: Mounting media.mount - External Media Directory... 
Mar 7 00:46:35.219943 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 00:46:35.219950 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 00:46:35.219956 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 00:46:35.219963 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 00:46:35.219969 systemd[1]: Reached target machines.target - Containers. Mar 7 00:46:35.219975 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 00:46:35.219982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:46:35.219989 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:46:35.219996 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 00:46:35.220002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:46:35.220008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:46:35.220016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:46:35.220022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 00:46:35.220029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:46:35.220035 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 00:46:35.220042 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 00:46:35.220049 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 00:46:35.220056 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Mar 7 00:46:35.220062 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 00:46:35.220069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 00:46:35.220075 kernel: fuse: init (API version 7.41) Mar 7 00:46:35.220081 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:46:35.220087 kernel: loop: module loaded Mar 7 00:46:35.220093 kernel: ACPI: bus type drm_connector registered Mar 7 00:46:35.220100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:46:35.220107 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 00:46:35.220113 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 00:46:35.220135 systemd-journald[1393]: Collecting audit messages is disabled. Mar 7 00:46:35.220151 systemd-journald[1393]: Journal started Mar 7 00:46:35.220166 systemd-journald[1393]: Runtime Journal (/run/log/journal/d7a10be63ab2412daa106f86e369e050) is 8M, max 78.3M, 70.3M free. Mar 7 00:46:34.493980 systemd[1]: Queued start job for default target multi-user.target. Mar 7 00:46:34.501127 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 7 00:46:34.501523 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 00:46:34.501823 systemd[1]: systemd-journald.service: Consumed 2.525s CPU time. Mar 7 00:46:35.231887 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 7 00:46:35.244422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:46:35.251951 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 00:46:35.251995 systemd[1]: Stopped verity-setup.service. 
Mar 7 00:46:35.266449 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:46:35.268802 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 00:46:35.273145 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 00:46:35.278424 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 00:46:35.283370 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 00:46:35.288054 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 00:46:35.292799 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 00:46:35.301191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 00:46:35.306203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:46:35.311706 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 00:46:35.311844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 00:46:35.317197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:46:35.317324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:46:35.322129 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:46:35.322257 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:46:35.326790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:46:35.326911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:46:35.332351 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 00:46:35.332465 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 00:46:35.337200 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:46:35.337313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 7 00:46:35.343685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 00:46:35.348554 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 00:46:35.354119 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 00:46:35.359788 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 7 00:46:35.366304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:46:35.379954 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 00:46:35.385639 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 00:46:35.400746 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 00:46:35.405554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 00:46:35.405589 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:46:35.411120 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 7 00:46:35.417528 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 00:46:35.421979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:46:35.423317 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 00:46:35.428927 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 00:46:35.434087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:46:35.434896 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Mar 7 00:46:35.440224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:46:35.442938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:46:35.470798 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 00:46:35.478769 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 00:46:35.485544 systemd-journald[1393]: Time spent on flushing to /var/log/journal/d7a10be63ab2412daa106f86e369e050 is 32.681ms for 929 entries. Mar 7 00:46:35.485544 systemd-journald[1393]: System Journal (/var/log/journal/d7a10be63ab2412daa106f86e369e050) is 11.8M, max 2.6G, 2.6G free. Mar 7 00:46:35.577872 systemd-journald[1393]: Received client request to flush runtime journal. Mar 7 00:46:35.577935 systemd-journald[1393]: /var/log/journal/d7a10be63ab2412daa106f86e369e050/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Mar 7 00:46:35.577958 systemd-journald[1393]: Rotating system journal. Mar 7 00:46:35.577975 kernel: loop0: detected capacity change from 0 to 209336 Mar 7 00:46:35.486873 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 00:46:35.496041 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 00:46:35.514137 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 00:46:35.521594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:46:35.534298 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 00:46:35.544193 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 7 00:46:35.579300 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Mar 7 00:46:35.602680 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 00:46:35.608804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 00:46:35.609416 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 7 00:46:35.635685 kernel: loop1: detected capacity change from 0 to 27936 Mar 7 00:46:35.677073 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 00:46:35.683594 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:46:35.830039 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Mar 7 00:46:35.830053 systemd-tmpfiles[1454]: ACLs are not supported, ignoring. Mar 7 00:46:35.832977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:46:35.983584 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 00:46:35.990858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:46:36.017912 systemd-udevd[1459]: Using default interface naming scheme 'v255'. Mar 7 00:46:36.046674 kernel: loop2: detected capacity change from 0 to 119840 Mar 7 00:46:36.196422 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:46:36.206703 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:46:36.254000 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 00:46:36.287549 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 7 00:46:36.317116 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 7 00:46:36.347742 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 00:46:36.420999 kernel: hv_vmbus: registering driver hv_balloon Mar 7 00:46:36.421086 kernel: hv_vmbus: registering driver hyperv_fb Mar 7 00:46:36.421101 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 7 00:46:36.426228 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 7 00:46:36.460241 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 7 00:46:36.468048 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 7 00:46:36.473068 kernel: Console: switching to colour dummy device 80x25 Mar 7 00:46:36.473733 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 00:46:36.483698 kernel: Console: switching to colour frame buffer device 128x48 Mar 7 00:46:36.505985 systemd-networkd[1477]: lo: Link UP Mar 7 00:46:36.506372 systemd-networkd[1477]: lo: Gained carrier Mar 7 00:46:36.508009 kernel: loop3: detected capacity change from 0 to 100632 Mar 7 00:46:36.507825 systemd-networkd[1477]: Enumeration completed Mar 7 00:46:36.507932 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:46:36.508218 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:46:36.508262 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:46:36.516435 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 7 00:46:36.523286 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 00:46:36.537878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:46:36.545921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 7 00:46:36.546343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:46:36.556576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:46:36.573678 kernel: mlx5_core 4cb2:00:02.0 enP19634s1: Link up Mar 7 00:46:36.597686 kernel: hv_netvsc 7ced8dd3-c7b9-7ced-8dd3-c7b97ced8dd3 eth0: Data path switched to VF: enP19634s1 Mar 7 00:46:36.598706 systemd-networkd[1477]: enP19634s1: Link UP Mar 7 00:46:36.598836 systemd-networkd[1477]: eth0: Link UP Mar 7 00:46:36.598840 systemd-networkd[1477]: eth0: Gained carrier Mar 7 00:46:36.598858 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:46:36.609099 systemd-networkd[1477]: enP19634s1: Gained carrier Mar 7 00:46:36.610961 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 7 00:46:36.622723 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 00:46:36.635678 kernel: MACsec IEEE 802.1AE Mar 7 00:46:36.712742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 00:46:36.721919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 00:46:36.757779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 00:46:36.923673 kernel: loop4: detected capacity change from 0 to 209336 Mar 7 00:46:36.939678 kernel: loop5: detected capacity change from 0 to 27936 Mar 7 00:46:36.953951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 00:46:36.965682 kernel: loop6: detected capacity change from 0 to 119840 Mar 7 00:46:36.977676 kernel: loop7: detected capacity change from 0 to 100632 Mar 7 00:46:36.985864 (sd-merge)[1606]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 7 00:46:36.986242 (sd-merge)[1606]: Merged extensions into '/usr'. Mar 7 00:46:36.989715 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 00:46:36.989728 systemd[1]: Reloading... Mar 7 00:46:37.051684 zram_generator::config[1653]: No configuration found. Mar 7 00:46:37.207159 systemd[1]: Reloading finished in 217 ms. Mar 7 00:46:37.234358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 00:46:37.248735 systemd[1]: Starting ensure-sysext.service... Mar 7 00:46:37.253782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:46:37.269134 systemd[1]: Reload requested from client PID 1693 ('systemctl') (unit ensure-sysext.service)... Mar 7 00:46:37.269147 systemd[1]: Reloading... Mar 7 00:46:37.277526 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 7 00:46:37.292119 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 7 00:46:37.292513 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 00:46:37.292961 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 00:46:37.293985 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 00:46:37.294854 systemd-tmpfiles[1694]: ACLs are not supported, ignoring. Mar 7 00:46:37.294975 systemd-tmpfiles[1694]: ACLs are not supported, ignoring. 
Mar 7 00:46:37.298263 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:46:37.299181 systemd-tmpfiles[1694]: Skipping /boot Mar 7 00:46:37.306339 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:46:37.306442 systemd-tmpfiles[1694]: Skipping /boot Mar 7 00:46:37.328679 zram_generator::config[1725]: No configuration found. Mar 7 00:46:37.491238 systemd[1]: Reloading finished in 221 ms. Mar 7 00:46:37.501452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:46:37.529562 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 7 00:46:37.543429 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 00:46:37.548836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:46:37.550869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:46:37.565279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:46:37.572851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:46:37.578066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:46:37.578174 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 00:46:37.580835 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 00:46:37.592094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 00:46:37.598847 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 7 00:46:37.605110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:46:37.609813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:46:37.616485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:46:37.616914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:46:37.622733 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:46:37.623140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:46:37.634565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:46:37.637958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:46:37.644890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:46:37.653276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:46:37.658447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:46:37.658842 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 00:46:37.662512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:46:37.662651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:46:37.667667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:46:37.667791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:46:37.673435 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:46:37.673569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 7 00:46:37.680448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 00:46:37.687472 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 00:46:37.700225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:46:37.701301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:46:37.711836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:46:37.723577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:46:37.731878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:46:37.740146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:46:37.740252 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 00:46:37.740358 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 00:46:37.744890 systemd-resolved[1794]: Positive Trust Anchors: Mar 7 00:46:37.744905 systemd-resolved[1794]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:46:37.744926 systemd-resolved[1794]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:46:37.746453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:46:37.746608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:46:37.752745 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:46:37.753011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:46:37.757215 augenrules[1825]: No rules Mar 7 00:46:37.758031 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:46:37.758268 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 00:46:37.761637 systemd-resolved[1794]: Using system hostname 'ci-4459.2.3-n-9877c76adf'. Mar 7 00:46:37.762847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:46:37.763034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:46:37.768912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:46:37.774055 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:46:37.774308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:46:37.782266 systemd[1]: Finished ensure-sysext.service. Mar 7 00:46:37.788831 systemd[1]: Reached target network.target - Network. 
Mar 7 00:46:37.792937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:46:37.798169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:46:37.798316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:46:38.218819 systemd-networkd[1477]: eth0: Gained IPv6LL Mar 7 00:46:38.221724 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 00:46:38.227430 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 00:46:38.360488 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 00:46:38.365993 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:46:40.735390 ldconfig[1430]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 00:46:40.748117 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 00:46:40.754283 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 00:46:40.771304 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 00:46:40.776786 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:46:40.781457 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 00:46:40.786737 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 00:46:40.792941 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Mar 7 00:46:40.797847 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 00:46:40.803128 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 00:46:40.808636 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 00:46:40.808689 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:46:40.812429 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:46:40.818021 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 00:46:40.824550 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 00:46:40.830129 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 7 00:46:40.836084 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 7 00:46:40.842207 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 7 00:46:40.848471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 00:46:40.853635 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 7 00:46:40.859815 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 00:46:40.864408 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:46:40.868417 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:46:40.872787 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:46:40.872808 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:46:40.875008 systemd[1]: Starting chronyd.service - NTP client/server... Mar 7 00:46:40.890767 systemd[1]: Starting containerd.service - containerd container runtime... 
Mar 7 00:46:40.897158 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:46:40.907806 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:46:40.914587 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 00:46:40.922640 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 00:46:40.928394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 00:46:40.932583 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:46:40.939789 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 7 00:46:40.945025 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 7 00:46:40.946754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:46:40.947707 chronyd[1843]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Mar 7 00:46:40.953291 jq[1851]: false Mar 7 00:46:40.956314 KVP[1853]: KVP starting; pid is:1853 Mar 7 00:46:40.956612 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 00:46:40.963771 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 00:46:40.964904 KVP[1853]: KVP LIC Version: 3.1 Mar 7 00:46:40.965696 kernel: hv_utils: KVP IC version 4.0 Mar 7 00:46:40.970923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 00:46:40.975198 chronyd[1843]: Timezone right/UTC failed leap second check, ignoring Mar 7 00:46:40.975349 chronyd[1843]: Loaded seccomp filter (level 2) Mar 7 00:46:40.977703 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 7 00:46:40.986582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:46:40.986752 extend-filesystems[1852]: Found /dev/sda6 Mar 7 00:46:41.002090 extend-filesystems[1852]: Found /dev/sda9 Mar 7 00:46:40.998778 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:46:41.019951 extend-filesystems[1852]: Checking size of /dev/sda9 Mar 7 00:46:41.014358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:46:41.014806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:46:41.015258 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:46:41.024701 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 00:46:41.031586 systemd[1]: Started chronyd.service - NTP client/server. Mar 7 00:46:41.040751 jq[1880]: true Mar 7 00:46:41.042709 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 00:46:41.050130 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:46:41.052196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 00:46:41.053174 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:46:41.053732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:46:41.061022 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 00:46:41.069208 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:46:41.069364 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 00:46:41.075472 extend-filesystems[1852]: Old size kept for /dev/sda9 Mar 7 00:46:41.079319 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 7 00:46:41.079470 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:46:41.110397 jq[1890]: true Mar 7 00:46:41.118334 update_engine[1878]: I20260307 00:46:41.116595 1878 main.cc:92] Flatcar Update Engine starting Mar 7 00:46:41.124271 systemd-logind[1870]: New seat seat0. Mar 7 00:46:41.124994 (ntainerd)[1891]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:46:41.127117 systemd-logind[1870]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 7 00:46:41.127260 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:46:41.151214 tar[1889]: linux-arm64/LICENSE Mar 7 00:46:41.151214 tar[1889]: linux-arm64/helm Mar 7 00:46:41.237471 dbus-daemon[1846]: [system] SELinux support is enabled Mar 7 00:46:41.237764 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 00:46:41.247285 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 00:46:41.247367 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:46:41.256236 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:46:41.262810 bash[1927]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:46:41.256285 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:46:41.263032 update_engine[1878]: I20260307 00:46:41.262950 1878 update_check_scheduler.cc:74] Next update check in 11m36s Mar 7 00:46:41.266349 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 7 00:46:41.274077 dbus-daemon[1846]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 00:46:41.274384 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:46:41.282518 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 00:46:41.285055 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:46:41.362668 coreos-metadata[1845]: Mar 07 00:46:41.360 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 00:46:41.362668 coreos-metadata[1845]: Mar 07 00:46:41.361 INFO Fetch successful Mar 7 00:46:41.362668 coreos-metadata[1845]: Mar 07 00:46:41.361 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 7 00:46:41.366741 coreos-metadata[1845]: Mar 07 00:46:41.366 INFO Fetch successful Mar 7 00:46:41.366741 coreos-metadata[1845]: Mar 07 00:46:41.366 INFO Fetching http://168.63.129.16/machine/796a9ef6-ed55-4ffb-9fb0-17b9f4d0a674/f6f667fd%2D7f5a%2D4908%2D8d1f%2Dc9099e98dda0.%5Fci%2D4459.2.3%2Dn%2D9877c76adf?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 7 00:46:41.370720 coreos-metadata[1845]: Mar 07 00:46:41.370 INFO Fetch successful Mar 7 00:46:41.370720 coreos-metadata[1845]: Mar 07 00:46:41.370 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 7 00:46:41.392496 coreos-metadata[1845]: Mar 07 00:46:41.391 INFO Fetch successful Mar 7 00:46:41.438300 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:46:41.445314 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:46:41.585771 locksmithd[1944]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:46:41.598040 sshd_keygen[1879]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:46:41.630314 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Mar 7 00:46:41.642317 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:46:41.654856 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 7 00:46:41.662487 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:46:41.662673 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 00:46:41.671003 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:46:41.687691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:46:41.694971 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:46:41.701667 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 7 00:46:41.709816 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:46:41.720105 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 7 00:46:41.724839 tar[1889]: linux-arm64/README.md Mar 7 00:46:41.739253 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 7 00:46:41.837918 containerd[1891]: time="2026-03-07T00:46:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 7 00:46:41.838146 containerd[1891]: time="2026-03-07T00:46:41.837969856Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 7 00:46:41.845185 containerd[1891]: time="2026-03-07T00:46:41.845145168Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.064µs" Mar 7 00:46:41.845185 containerd[1891]: time="2026-03-07T00:46:41.845176960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 7 00:46:41.845286 containerd[1891]: time="2026-03-07T00:46:41.845196952Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845346464Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845362176Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845383720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845429096Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845436160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 
containerd[1891]: time="2026-03-07T00:46:41.845621264Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845636352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845643456Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845648160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845722424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846065 containerd[1891]: time="2026-03-07T00:46:41.845900256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846250 containerd[1891]: time="2026-03-07T00:46:41.845922032Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 00:46:41.846250 containerd[1891]: time="2026-03-07T00:46:41.845928640Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 7 00:46:41.846250 containerd[1891]: time="2026-03-07T00:46:41.845956056Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 7 00:46:41.846250 containerd[1891]: 
time="2026-03-07T00:46:41.846127968Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 7 00:46:41.846250 containerd[1891]: time="2026-03-07T00:46:41.846187688Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:46:41.859891 containerd[1891]: time="2026-03-07T00:46:41.859849696Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859902584Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859912640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859920920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859928560Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859938296Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859947088Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 7 00:46:41.859959 containerd[1891]: time="2026-03-07T00:46:41.859954216Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 7 00:46:41.860059 containerd[1891]: time="2026-03-07T00:46:41.859962776Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 7 00:46:41.860059 containerd[1891]: time="2026-03-07T00:46:41.859969080Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 7 00:46:41.860059 containerd[1891]: time="2026-03-07T00:46:41.859974560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 7 00:46:41.860059 containerd[1891]: time="2026-03-07T00:46:41.859983248Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 7 00:46:41.860104 containerd[1891]: time="2026-03-07T00:46:41.860094392Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 7 00:46:41.860116 containerd[1891]: time="2026-03-07T00:46:41.860107760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 7 00:46:41.860128 containerd[1891]: time="2026-03-07T00:46:41.860121896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 7 00:46:41.860145 containerd[1891]: time="2026-03-07T00:46:41.860129576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 7 00:46:41.860145 containerd[1891]: time="2026-03-07T00:46:41.860140472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 7 00:46:41.860166 containerd[1891]: time="2026-03-07T00:46:41.860147368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 7 00:46:41.860166 containerd[1891]: time="2026-03-07T00:46:41.860154736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 7 00:46:41.860166 containerd[1891]: time="2026-03-07T00:46:41.860160880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 7 00:46:41.860234 containerd[1891]: time="2026-03-07T00:46:41.860167928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 7 
00:46:41.860234 containerd[1891]: time="2026-03-07T00:46:41.860173960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 7 00:46:41.860234 containerd[1891]: time="2026-03-07T00:46:41.860179920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 7 00:46:41.860234 containerd[1891]: time="2026-03-07T00:46:41.860221672Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 7 00:46:41.860279 containerd[1891]: time="2026-03-07T00:46:41.860249520Z" level=info msg="Start snapshots syncer" Mar 7 00:46:41.860279 containerd[1891]: time="2026-03-07T00:46:41.860267152Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 7 00:46:41.860495 containerd[1891]: time="2026-03-07T00:46:41.860455160Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 7 00:46:41.860583 containerd[1891]: time="2026-03-07T00:46:41.860497640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 7 00:46:41.860583 containerd[1891]: time="2026-03-07T00:46:41.860528928Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 7 00:46:41.860634 containerd[1891]: time="2026-03-07T00:46:41.860618304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 7 00:46:41.860650 containerd[1891]: time="2026-03-07T00:46:41.860636416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 7 00:46:41.860650 containerd[1891]: time="2026-03-07T00:46:41.860643456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860650848Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860679680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 7 00:46:41.861185 containerd[1891]: 
time="2026-03-07T00:46:41.860686632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860695168Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860712520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860719696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860727344Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860749248Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860758840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860764728Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860783872Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860788648Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860793784Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 7 00:46:41.861185 containerd[1891]: time="2026-03-07T00:46:41.860800328Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 7 00:46:41.861422 containerd[1891]: time="2026-03-07T00:46:41.860811728Z" level=info msg="runtime interface created" Mar 7 00:46:41.861422 containerd[1891]: time="2026-03-07T00:46:41.860815232Z" level=info msg="created NRI interface" Mar 7 00:46:41.861422 containerd[1891]: time="2026-03-07T00:46:41.860821128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 7 00:46:41.861422 containerd[1891]: time="2026-03-07T00:46:41.860828592Z" level=info msg="Connect containerd service" Mar 7 00:46:41.861422 containerd[1891]: time="2026-03-07T00:46:41.860843560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 00:46:41.861422 containerd[1891]: time="2026-03-07T00:46:41.861382704Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:46:41.959460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:46:41.972134 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:46:42.274940 containerd[1891]: time="2026-03-07T00:46:42.274834912Z" level=info msg="Start subscribing containerd event" Mar 7 00:46:42.275089 containerd[1891]: time="2026-03-07T00:46:42.275074536Z" level=info msg="Start recovering state" Mar 7 00:46:42.275181 containerd[1891]: time="2026-03-07T00:46:42.274888504Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 7 00:46:42.275222 containerd[1891]: time="2026-03-07T00:46:42.275210976Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 00:46:42.275320 containerd[1891]: time="2026-03-07T00:46:42.275305624Z" level=info msg="Start event monitor" Mar 7 00:46:42.275687 containerd[1891]: time="2026-03-07T00:46:42.275667464Z" level=info msg="Start cni network conf syncer for default" Mar 7 00:46:42.275748 containerd[1891]: time="2026-03-07T00:46:42.275737536Z" level=info msg="Start streaming server" Mar 7 00:46:42.275787 containerd[1891]: time="2026-03-07T00:46:42.275778392Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 7 00:46:42.275838 containerd[1891]: time="2026-03-07T00:46:42.275828336Z" level=info msg="runtime interface starting up..." Mar 7 00:46:42.275870 containerd[1891]: time="2026-03-07T00:46:42.275862072Z" level=info msg="starting plugins..." Mar 7 00:46:42.275921 containerd[1891]: time="2026-03-07T00:46:42.275910232Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 7 00:46:42.276101 containerd[1891]: time="2026-03-07T00:46:42.276086728Z" level=info msg="containerd successfully booted in 0.438936s" Mar 7 00:46:42.276202 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 00:46:42.282275 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 00:46:42.286982 systemd[1]: Startup finished in 1.723s (kernel) + 11.975s (initrd) + 10.817s (userspace) = 24.516s. 
Mar 7 00:46:42.341505 kubelet[2040]: E0307 00:46:42.341465 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:46:42.347017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:46:42.347120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:46:42.347701 systemd[1]: kubelet.service: Consumed 523ms CPU time, 258.5M memory peak. Mar 7 00:46:42.587435 login[2018]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 7 00:46:42.588872 login[2019]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:46:42.597838 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 00:46:42.600760 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 00:46:42.602924 systemd-logind[1870]: New session 2 of user core. Mar 7 00:46:42.634685 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 00:46:42.636387 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 00:46:42.648879 (systemd)[2058]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 00:46:42.651127 systemd-logind[1870]: New session c1 of user core. Mar 7 00:46:42.775457 systemd[2058]: Queued start job for default target default.target. Mar 7 00:46:42.785441 systemd[2058]: Created slice app.slice - User Application Slice. Mar 7 00:46:42.785465 systemd[2058]: Reached target paths.target - Paths. Mar 7 00:46:42.785650 systemd[2058]: Reached target timers.target - Timers. Mar 7 00:46:42.786745 systemd[2058]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 7 00:46:42.794261 systemd[2058]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 00:46:42.794308 systemd[2058]: Reached target sockets.target - Sockets. Mar 7 00:46:42.794337 systemd[2058]: Reached target basic.target - Basic System. Mar 7 00:46:42.794358 systemd[2058]: Reached target default.target - Main User Target. Mar 7 00:46:42.794378 systemd[2058]: Startup finished in 137ms. Mar 7 00:46:42.794473 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 00:46:42.796412 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 00:46:43.531186 waagent[2022]: 2026-03-07T00:46:43.531105Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 7 00:46:43.536170 waagent[2022]: 2026-03-07T00:46:43.536114Z INFO Daemon Daemon OS: flatcar 4459.2.3 Mar 7 00:46:43.539618 waagent[2022]: 2026-03-07T00:46:43.539579Z INFO Daemon Daemon Python: 3.11.13 Mar 7 00:46:43.543088 waagent[2022]: 2026-03-07T00:46:43.543032Z INFO Daemon Daemon Run daemon Mar 7 00:46:43.546312 waagent[2022]: 2026-03-07T00:46:43.546273Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3' Mar 7 00:46:43.553420 waagent[2022]: 2026-03-07T00:46:43.553380Z INFO Daemon Daemon Using waagent for provisioning Mar 7 00:46:43.557585 waagent[2022]: 2026-03-07T00:46:43.557547Z INFO Daemon Daemon Activate resource disk Mar 7 00:46:43.561180 waagent[2022]: 2026-03-07T00:46:43.561151Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 7 00:46:43.569978 waagent[2022]: 2026-03-07T00:46:43.569941Z INFO Daemon Daemon Found device: None Mar 7 00:46:43.573451 waagent[2022]: 2026-03-07T00:46:43.573421Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 7 00:46:43.579907 waagent[2022]: 2026-03-07T00:46:43.579878Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to 
detect disk topology, duration=0 Mar 7 00:46:43.587824 login[2018]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:46:43.589084 waagent[2022]: 2026-03-07T00:46:43.588819Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 00:46:43.593424 waagent[2022]: 2026-03-07T00:46:43.593387Z INFO Daemon Daemon Running default provisioning handler Mar 7 00:46:43.600486 systemd-logind[1870]: New session 1 of user core. Mar 7 00:46:43.604757 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 00:46:43.608470 waagent[2022]: 2026-03-07T00:46:43.608021Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 7 00:46:43.620702 waagent[2022]: 2026-03-07T00:46:43.620636Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 7 00:46:43.628913 waagent[2022]: 2026-03-07T00:46:43.628684Z INFO Daemon Daemon cloud-init is enabled: False Mar 7 00:46:43.633262 waagent[2022]: 2026-03-07T00:46:43.633019Z INFO Daemon Daemon Copying ovf-env.xml Mar 7 00:46:43.689316 waagent[2022]: 2026-03-07T00:46:43.688724Z INFO Daemon Daemon Successfully mounted dvd Mar 7 00:46:43.719581 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 7 00:46:43.721860 waagent[2022]: 2026-03-07T00:46:43.721795Z INFO Daemon Daemon Detect protocol endpoint Mar 7 00:46:43.725789 waagent[2022]: 2026-03-07T00:46:43.725749Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 00:46:43.730208 waagent[2022]: 2026-03-07T00:46:43.730174Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 7 00:46:43.735084 waagent[2022]: 2026-03-07T00:46:43.735054Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 7 00:46:43.739296 waagent[2022]: 2026-03-07T00:46:43.739264Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 7 00:46:43.743042 waagent[2022]: 2026-03-07T00:46:43.743016Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 7 00:46:43.786381 waagent[2022]: 2026-03-07T00:46:43.786295Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 7 00:46:43.791462 waagent[2022]: 2026-03-07T00:46:43.791441Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 7 00:46:43.795419 waagent[2022]: 2026-03-07T00:46:43.795396Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 7 00:46:43.975274 waagent[2022]: 2026-03-07T00:46:43.975179Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 7 00:46:43.980712 waagent[2022]: 2026-03-07T00:46:43.980661Z INFO Daemon Daemon Forcing an update of the goal state. Mar 7 00:46:43.991051 waagent[2022]: 2026-03-07T00:46:43.991003Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 00:46:44.014566 waagent[2022]: 2026-03-07T00:46:44.014525Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 7 00:46:44.018925 waagent[2022]: 2026-03-07T00:46:44.018890Z INFO Daemon Mar 7 00:46:44.020982 waagent[2022]: 2026-03-07T00:46:44.020954Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6bf59bb5-c93f-4068-9c6f-643d5c91db3d eTag: 14973193496540481000 source: Fabric] Mar 7 00:46:44.030181 waagent[2022]: 2026-03-07T00:46:44.030146Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 7 00:46:44.035253 waagent[2022]: 2026-03-07T00:46:44.035222Z INFO Daemon Mar 7 00:46:44.037462 waagent[2022]: 2026-03-07T00:46:44.037406Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 7 00:46:44.047149 waagent[2022]: 2026-03-07T00:46:44.047120Z INFO Daemon Daemon Downloading artifacts profile blob Mar 7 00:46:44.108905 waagent[2022]: 2026-03-07T00:46:44.108826Z INFO Daemon Downloaded certificate {'thumbprint': 'E368351BEF7C0E5BED08E682161C23B8ECDE587C', 'hasPrivateKey': True} Mar 7 00:46:44.116332 waagent[2022]: 2026-03-07T00:46:44.116289Z INFO Daemon Fetch goal state completed Mar 7 00:46:44.127053 waagent[2022]: 2026-03-07T00:46:44.127015Z INFO Daemon Daemon Starting provisioning Mar 7 00:46:44.131138 waagent[2022]: 2026-03-07T00:46:44.131097Z INFO Daemon Daemon Handle ovf-env.xml. Mar 7 00:46:44.134907 waagent[2022]: 2026-03-07T00:46:44.134879Z INFO Daemon Daemon Set hostname [ci-4459.2.3-n-9877c76adf] Mar 7 00:46:44.140909 waagent[2022]: 2026-03-07T00:46:44.140865Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-n-9877c76adf] Mar 7 00:46:44.145591 waagent[2022]: 2026-03-07T00:46:44.145555Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 7 00:46:44.150206 waagent[2022]: 2026-03-07T00:46:44.150174Z INFO Daemon Daemon Primary interface is [eth0] Mar 7 00:46:44.176959 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:46:44.176964 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 00:46:44.176998 systemd-networkd[1477]: eth0: DHCP lease lost Mar 7 00:46:44.177975 waagent[2022]: 2026-03-07T00:46:44.177912Z INFO Daemon Daemon Create user account if not exists Mar 7 00:46:44.182201 waagent[2022]: 2026-03-07T00:46:44.182162Z INFO Daemon Daemon User core already exists, skip useradd Mar 7 00:46:44.186391 waagent[2022]: 2026-03-07T00:46:44.186356Z INFO Daemon Daemon Configure sudoer Mar 7 00:46:44.194223 waagent[2022]: 2026-03-07T00:46:44.194168Z INFO Daemon Daemon Configure sshd Mar 7 00:46:44.200975 waagent[2022]: 2026-03-07T00:46:44.200923Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 7 00:46:44.210420 waagent[2022]: 2026-03-07T00:46:44.210379Z INFO Daemon Daemon Deploy ssh public key. Mar 7 00:46:44.214490 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 00:46:45.313573 waagent[2022]: 2026-03-07T00:46:45.313506Z INFO Daemon Daemon Provisioning complete Mar 7 00:46:45.328089 waagent[2022]: 2026-03-07T00:46:45.328050Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 7 00:46:45.332845 waagent[2022]: 2026-03-07T00:46:45.332812Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 7 00:46:45.340622 waagent[2022]: 2026-03-07T00:46:45.340589Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 7 00:46:45.439058 waagent[2108]: 2026-03-07T00:46:45.438987Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 7 00:46:45.439338 waagent[2108]: 2026-03-07T00:46:45.439120Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3 Mar 7 00:46:45.439338 waagent[2108]: 2026-03-07T00:46:45.439160Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 7 00:46:45.439338 waagent[2108]: 2026-03-07T00:46:45.439195Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 7 00:46:45.478109 waagent[2108]: 2026-03-07T00:46:45.478039Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 7 00:46:45.478277 waagent[2108]: 2026-03-07T00:46:45.478246Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 00:46:45.478318 waagent[2108]: 2026-03-07T00:46:45.478299Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 00:46:45.484178 waagent[2108]: 2026-03-07T00:46:45.484132Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 00:46:45.489531 waagent[2108]: 2026-03-07T00:46:45.489497Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 7 00:46:45.489930 waagent[2108]: 2026-03-07T00:46:45.489897Z INFO ExtHandler Mar 7 00:46:45.489984 waagent[2108]: 2026-03-07T00:46:45.489965Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 39f97e84-9844-4aca-87f1-a54a58db9960 eTag: 14973193496540481000 source: Fabric] Mar 7 00:46:45.490200 waagent[2108]: 2026-03-07T00:46:45.490174Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 7 00:46:45.490591 waagent[2108]: 2026-03-07T00:46:45.490562Z INFO ExtHandler Mar 7 00:46:45.490621 waagent[2108]: 2026-03-07T00:46:45.490612Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 7 00:46:45.494333 waagent[2108]: 2026-03-07T00:46:45.494305Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 7 00:46:45.549997 waagent[2108]: 2026-03-07T00:46:45.549930Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E368351BEF7C0E5BED08E682161C23B8ECDE587C', 'hasPrivateKey': True} Mar 7 00:46:45.550369 waagent[2108]: 2026-03-07T00:46:45.550334Z INFO ExtHandler Fetch goal state completed Mar 7 00:46:45.564059 waagent[2108]: 2026-03-07T00:46:45.563969Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 7 00:46:45.567400 waagent[2108]: 2026-03-07T00:46:45.567354Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2108 Mar 7 00:46:45.567501 waagent[2108]: 2026-03-07T00:46:45.567475Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 7 00:46:45.567783 waagent[2108]: 2026-03-07T00:46:45.567753Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 7 00:46:45.568869 waagent[2108]: 2026-03-07T00:46:45.568833Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] Mar 7 00:46:45.569175 waagent[2108]: 2026-03-07T00:46:45.569144Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 7 00:46:45.569289 waagent[2108]: 2026-03-07T00:46:45.569266Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 7 00:46:45.569730 waagent[2108]: 2026-03-07T00:46:45.569701Z INFO ExtHandler ExtHandler Starting setup 
for Persistent firewall rules Mar 7 00:46:45.618896 waagent[2108]: 2026-03-07T00:46:45.618857Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 7 00:46:45.619083 waagent[2108]: 2026-03-07T00:46:45.619052Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 7 00:46:45.624058 waagent[2108]: 2026-03-07T00:46:45.624023Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 7 00:46:45.628896 systemd[1]: Reload requested from client PID 2123 ('systemctl') (unit waagent.service)... Mar 7 00:46:45.629096 systemd[1]: Reloading... Mar 7 00:46:45.695675 zram_generator::config[2162]: No configuration found. Mar 7 00:46:45.847511 systemd[1]: Reloading finished in 218 ms. Mar 7 00:46:45.865411 waagent[2108]: 2026-03-07T00:46:45.864754Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 7 00:46:45.865411 waagent[2108]: 2026-03-07T00:46:45.864899Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 7 00:46:46.106368 waagent[2108]: 2026-03-07T00:46:46.106243Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 7 00:46:46.106785 waagent[2108]: 2026-03-07T00:46:46.106746Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 7 00:46:46.107518 waagent[2108]: 2026-03-07T00:46:46.107478Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 7 00:46:46.107894 waagent[2108]: 2026-03-07T00:46:46.107859Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Mar 7 00:46:46.107957 waagent[2108]: 2026-03-07T00:46:46.107912Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 00:46:46.108006 waagent[2108]: 2026-03-07T00:46:46.107982Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 00:46:46.108168 waagent[2108]: 2026-03-07T00:46:46.108141Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 7 00:46:46.108487 waagent[2108]: 2026-03-07T00:46:46.108453Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 7 00:46:46.108585 waagent[2108]: 2026-03-07T00:46:46.108545Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 7 00:46:46.108702 waagent[2108]: 2026-03-07T00:46:46.108672Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 00:46:46.108849 waagent[2108]: 2026-03-07T00:46:46.108823Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 7 00:46:46.108849 waagent[2108]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 7 00:46:46.108849 waagent[2108]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 7 00:46:46.108849 waagent[2108]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 7 00:46:46.108849 waagent[2108]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 7 00:46:46.108849 waagent[2108]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 00:46:46.108849 waagent[2108]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 00:46:46.109235 waagent[2108]: 2026-03-07T00:46:46.109200Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 7 00:46:46.109274 waagent[2108]: 2026-03-07T00:46:46.109242Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 00:46:46.109414 waagent[2108]: 2026-03-07T00:46:46.109360Z INFO EnvHandler ExtHandler Configure routes Mar 7 00:46:46.109448 waagent[2108]: 2026-03-07T00:46:46.109437Z INFO EnvHandler 
ExtHandler Gateway:None Mar 7 00:46:46.109482 waagent[2108]: 2026-03-07T00:46:46.109466Z INFO EnvHandler ExtHandler Routes:None Mar 7 00:46:46.109605 waagent[2108]: 2026-03-07T00:46:46.109572Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 7 00:46:46.109764 waagent[2108]: 2026-03-07T00:46:46.109734Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 7 00:46:46.117596 waagent[2108]: 2026-03-07T00:46:46.117537Z INFO ExtHandler ExtHandler Mar 7 00:46:46.117864 waagent[2108]: 2026-03-07T00:46:46.117822Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 426ab14a-e1b5-4b5e-a4c9-b5ca0616218d correlation 28381827-e0ae-4f90-914c-130eede5fae5 created: 2026-03-07T00:45:45.885773Z] Mar 7 00:46:46.118312 waagent[2108]: 2026-03-07T00:46:46.118266Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 7 00:46:46.118902 waagent[2108]: 2026-03-07T00:46:46.118861Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 7 00:46:46.147797 waagent[2108]: 2026-03-07T00:46:46.147745Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 7 00:46:46.147797 waagent[2108]: Try `iptables -h' or 'iptables --help' for more information.) 
Mar 7 00:46:46.148336 waagent[2108]: 2026-03-07T00:46:46.148302Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 37478176-2909-4CEB-ACFF-2DA7A8BCC05A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 7 00:46:46.178709 waagent[2108]: 2026-03-07T00:46:46.178627Z INFO MonitorHandler ExtHandler Network interfaces: Mar 7 00:46:46.178709 waagent[2108]: Executing ['ip', '-a', '-o', 'link']: Mar 7 00:46:46.178709 waagent[2108]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 7 00:46:46.178709 waagent[2108]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d3:c7:b9 brd ff:ff:ff:ff:ff:ff Mar 7 00:46:46.178709 waagent[2108]: 3: enP19634s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d3:c7:b9 brd ff:ff:ff:ff:ff:ff\ altname enP19634p0s2 Mar 7 00:46:46.178709 waagent[2108]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 7 00:46:46.178709 waagent[2108]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 7 00:46:46.178709 waagent[2108]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 7 00:46:46.178709 waagent[2108]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 7 00:46:46.178709 waagent[2108]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 7 00:46:46.178709 waagent[2108]: 2: eth0 inet6 fe80::7eed:8dff:fed3:c7b9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 7 00:46:46.205822 waagent[2108]: 2026-03-07T00:46:46.205769Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 7 00:46:46.205822 waagent[2108]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 
00:46:46.205822 waagent[2108]: pkts bytes target prot opt in out source destination Mar 7 00:46:46.205822 waagent[2108]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 00:46:46.205822 waagent[2108]: pkts bytes target prot opt in out source destination Mar 7 00:46:46.205822 waagent[2108]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Mar 7 00:46:46.205822 waagent[2108]: pkts bytes target prot opt in out source destination Mar 7 00:46:46.205822 waagent[2108]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 00:46:46.205822 waagent[2108]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 00:46:46.205822 waagent[2108]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 00:46:46.208458 waagent[2108]: 2026-03-07T00:46:46.208411Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 7 00:46:46.208458 waagent[2108]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 00:46:46.208458 waagent[2108]: pkts bytes target prot opt in out source destination Mar 7 00:46:46.208458 waagent[2108]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 00:46:46.208458 waagent[2108]: pkts bytes target prot opt in out source destination Mar 7 00:46:46.208458 waagent[2108]: Chain OUTPUT (policy ACCEPT 5 packets, 646 bytes) Mar 7 00:46:46.208458 waagent[2108]: pkts bytes target prot opt in out source destination Mar 7 00:46:46.208458 waagent[2108]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 00:46:46.208458 waagent[2108]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 00:46:46.208458 waagent[2108]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 00:46:46.208641 waagent[2108]: 2026-03-07T00:46:46.208618Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 7 00:46:52.597883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 7 00:46:52.599247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:46:52.717299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:46:52.722920 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:46:52.851615 kubelet[2257]: E0307 00:46:52.851499 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:46:52.854444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:46:52.854556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:46:52.855046 systemd[1]: kubelet.service: Consumed 113ms CPU time, 107.7M memory peak. Mar 7 00:47:01.886149 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 00:47:01.887131 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:38706.service - OpenSSH per-connection server daemon (10.200.16.10:38706). Mar 7 00:47:02.512694 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 38706 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0 Mar 7 00:47:02.513466 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:47:02.517468 systemd-logind[1870]: New session 3 of user core. Mar 7 00:47:02.523774 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 00:47:02.834846 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:38710.service - OpenSSH per-connection server daemon (10.200.16.10:38710). Mar 7 00:47:02.878100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 7 00:47:02.879535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:47:03.140861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:47:03.154144 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:47:03.183569 kubelet[2282]: E0307 00:47:03.183522 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:47:03.185749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:47:03.185858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:47:03.186163 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105M memory peak. Mar 7 00:47:03.259864 sshd[2271]: Accepted publickey for core from 10.200.16.10 port 38710 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0 Mar 7 00:47:03.261335 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:47:03.264879 systemd-logind[1870]: New session 4 of user core. Mar 7 00:47:03.271787 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 00:47:03.497291 sshd[2288]: Connection closed by 10.200.16.10 port 38710 Mar 7 00:47:03.498055 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Mar 7 00:47:03.501214 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 00:47:03.501215 systemd-logind[1870]: Session 4 logged out. Waiting for processes to exit. Mar 7 00:47:03.502487 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:38710.service: Deactivated successfully. 
Mar 7 00:47:03.587386 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:38726.service - OpenSSH per-connection server daemon (10.200.16.10:38726). Mar 7 00:47:04.018117 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 38726 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0 Mar 7 00:47:04.019261 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:47:04.022499 systemd-logind[1870]: New session 5 of user core. Mar 7 00:47:04.029888 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 00:47:04.248487 sshd[2297]: Connection closed by 10.200.16.10 port 38726 Mar 7 00:47:04.249084 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Mar 7 00:47:04.252868 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:38726.service: Deactivated successfully. Mar 7 00:47:04.254564 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 00:47:04.256232 systemd-logind[1870]: Session 5 logged out. Waiting for processes to exit. Mar 7 00:47:04.257799 systemd-logind[1870]: Removed session 5. Mar 7 00:47:04.341571 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:38728.service - OpenSSH per-connection server daemon (10.200.16.10:38728). Mar 7 00:47:04.766121 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 38728 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0 Mar 7 00:47:04.769140 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:47:04.771764 chronyd[1843]: Selected source PHC0 Mar 7 00:47:04.773325 systemd-logind[1870]: New session 6 of user core. Mar 7 00:47:04.778793 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 00:47:05.000478 sshd[2306]: Connection closed by 10.200.16.10 port 38728 Mar 7 00:47:05.001062 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Mar 7 00:47:05.004236 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:38728.service: Deactivated successfully. Mar 7 00:47:05.005644 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 00:47:05.006826 systemd-logind[1870]: Session 6 logged out. Waiting for processes to exit. Mar 7 00:47:05.007981 systemd-logind[1870]: Removed session 6. Mar 7 00:47:05.088368 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:38732.service - OpenSSH per-connection server daemon (10.200.16.10:38732). Mar 7 00:47:05.511396 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 38732 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0 Mar 7 00:47:05.512570 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:47:05.516009 systemd-logind[1870]: New session 7 of user core. Mar 7 00:47:05.524841 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 00:47:05.803689 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 00:47:05.803929 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:47:05.833523 sudo[2316]: pam_unix(sudo:session): session closed for user root Mar 7 00:47:05.910950 sshd[2315]: Connection closed by 10.200.16.10 port 38732 Mar 7 00:47:05.911625 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Mar 7 00:47:05.915615 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:38732.service: Deactivated successfully. Mar 7 00:47:05.918349 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:47:05.919246 systemd-logind[1870]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:47:05.920609 systemd-logind[1870]: Removed session 7. 
Mar 7 00:47:06.000637 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:38738.service - OpenSSH per-connection server daemon (10.200.16.10:38738).
Mar 7 00:47:06.421938 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 38738 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:47:06.423109 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:47:06.426687 systemd-logind[1870]: New session 8 of user core.
Mar 7 00:47:06.436802 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 00:47:06.580134 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 00:47:06.580348 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:47:06.586530 sudo[2327]: pam_unix(sudo:session): session closed for user root
Mar 7 00:47:06.590474 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 7 00:47:06.591003 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:47:06.598914 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 7 00:47:06.634506 augenrules[2349]: No rules
Mar 7 00:47:06.635763 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 00:47:06.635939 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 7 00:47:06.637372 sudo[2326]: pam_unix(sudo:session): session closed for user root
Mar 7 00:47:06.714544 sshd[2325]: Connection closed by 10.200.16.10 port 38738
Mar 7 00:47:06.715276 sshd-session[2322]: pam_unix(sshd:session): session closed for user core
Mar 7 00:47:06.718195 systemd-logind[1870]: Session 8 logged out. Waiting for processes to exit.
Mar 7 00:47:06.718431 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:38738.service: Deactivated successfully.
Mar 7 00:47:06.719734 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 00:47:06.721484 systemd-logind[1870]: Removed session 8.
Mar 7 00:47:06.802364 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:38746.service - OpenSSH per-connection server daemon (10.200.16.10:38746).
Mar 7 00:47:07.221693 sshd[2358]: Accepted publickey for core from 10.200.16.10 port 38746 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:47:07.222807 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:47:07.226164 systemd-logind[1870]: New session 9 of user core.
Mar 7 00:47:07.235021 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 00:47:07.378956 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 00:47:07.379164 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:47:08.665380 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 00:47:08.674953 (dockerd)[2380]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 00:47:09.898568 dockerd[2380]: time="2026-03-07T00:47:09.897885123Z" level=info msg="Starting up"
Mar 7 00:47:09.900048 dockerd[2380]: time="2026-03-07T00:47:09.900016243Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 7 00:47:09.908922 dockerd[2380]: time="2026-03-07T00:47:09.908887707Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 7 00:47:09.984055 dockerd[2380]: time="2026-03-07T00:47:09.984006283Z" level=info msg="Loading containers: start."
Mar 7 00:47:10.011680 kernel: Initializing XFRM netlink socket
Mar 7 00:47:10.310343 systemd-networkd[1477]: docker0: Link UP
Mar 7 00:47:10.325299 dockerd[2380]: time="2026-03-07T00:47:10.325252819Z" level=info msg="Loading containers: done."
Mar 7 00:47:10.335254 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3894197529-merged.mount: Deactivated successfully.
Mar 7 00:47:10.349002 dockerd[2380]: time="2026-03-07T00:47:10.348952019Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 00:47:10.349166 dockerd[2380]: time="2026-03-07T00:47:10.349050763Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 7 00:47:10.349166 dockerd[2380]: time="2026-03-07T00:47:10.349143211Z" level=info msg="Initializing buildkit"
Mar 7 00:47:10.395616 dockerd[2380]: time="2026-03-07T00:47:10.395567043Z" level=info msg="Completed buildkit initialization"
Mar 7 00:47:10.401501 dockerd[2380]: time="2026-03-07T00:47:10.401459203Z" level=info msg="Daemon has completed initialization"
Mar 7 00:47:10.401545 dockerd[2380]: time="2026-03-07T00:47:10.401518731Z" level=info msg="API listen on /run/docker.sock"
Mar 7 00:47:10.402002 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 00:47:10.738233 containerd[1891]: time="2026-03-07T00:47:10.737885171Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 7 00:47:11.628096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154355797.mount: Deactivated successfully.
Mar 7 00:47:13.299686 containerd[1891]: time="2026-03-07T00:47:13.299204019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:13.302582 containerd[1891]: time="2026-03-07T00:47:13.302553936Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 7 00:47:13.305620 containerd[1891]: time="2026-03-07T00:47:13.305594402Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:13.310826 containerd[1891]: time="2026-03-07T00:47:13.310770713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:13.311678 containerd[1891]: time="2026-03-07T00:47:13.311464372Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.573542055s"
Mar 7 00:47:13.311678 containerd[1891]: time="2026-03-07T00:47:13.311493294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 7 00:47:13.312398 containerd[1891]: time="2026-03-07T00:47:13.312354118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 00:47:13.378086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 00:47:13.379861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:47:13.478209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:13.485038 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:47:13.510509 kubelet[2655]: E0307 00:47:13.510461 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:47:13.512466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:47:13.512565 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:47:13.512939 systemd[1]: kubelet.service: Consumed 105ms CPU time, 104.5M memory peak.
Mar 7 00:47:16.048772 containerd[1891]: time="2026-03-07T00:47:16.048721928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:16.052139 containerd[1891]: time="2026-03-07T00:47:16.052109603Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 7 00:47:16.055398 containerd[1891]: time="2026-03-07T00:47:16.055372946Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:16.060098 containerd[1891]: time="2026-03-07T00:47:16.060068462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:16.060641 containerd[1891]: time="2026-03-07T00:47:16.060617944Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 2.748216099s"
Mar 7 00:47:16.060667 containerd[1891]: time="2026-03-07T00:47:16.060646769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 7 00:47:16.061206 containerd[1891]: time="2026-03-07T00:47:16.061185522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 00:47:18.160555 containerd[1891]: time="2026-03-07T00:47:18.160487724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:18.163557 containerd[1891]: time="2026-03-07T00:47:18.163523756Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 7 00:47:18.166245 containerd[1891]: time="2026-03-07T00:47:18.166205385Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:18.172579 containerd[1891]: time="2026-03-07T00:47:18.172534304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:18.173153 containerd[1891]: time="2026-03-07T00:47:18.172931909Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 2.11171725s"
Mar 7 00:47:18.173153 containerd[1891]: time="2026-03-07T00:47:18.172961174Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 7 00:47:18.173728 containerd[1891]: time="2026-03-07T00:47:18.173709325Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 00:47:19.170768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112286230.mount: Deactivated successfully.
Mar 7 00:47:19.424684 containerd[1891]: time="2026-03-07T00:47:19.423925109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:19.426618 containerd[1891]: time="2026-03-07T00:47:19.426590025Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 7 00:47:19.429821 containerd[1891]: time="2026-03-07T00:47:19.429782670Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:19.434717 containerd[1891]: time="2026-03-07T00:47:19.434013555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:19.434717 containerd[1891]: time="2026-03-07T00:47:19.434425384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.260632944s"
Mar 7 00:47:19.434717 containerd[1891]: time="2026-03-07T00:47:19.434446993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 7 00:47:19.435040 containerd[1891]: time="2026-03-07T00:47:19.434993226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 00:47:20.041270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330027120.mount: Deactivated successfully.
Mar 7 00:47:21.423684 containerd[1891]: time="2026-03-07T00:47:21.423519397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:21.427210 containerd[1891]: time="2026-03-07T00:47:21.427179552Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 7 00:47:21.429920 containerd[1891]: time="2026-03-07T00:47:21.429883733Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:21.434973 containerd[1891]: time="2026-03-07T00:47:21.434805095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:21.435478 containerd[1891]: time="2026-03-07T00:47:21.435448836Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.000431897s"
Mar 7 00:47:21.435478 containerd[1891]: time="2026-03-07T00:47:21.435476204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 7 00:47:21.436074 containerd[1891]: time="2026-03-07T00:47:21.436055439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 00:47:22.010492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740828389.mount: Deactivated successfully.
Mar 7 00:47:22.028991 containerd[1891]: time="2026-03-07T00:47:22.028476109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:47:22.032386 containerd[1891]: time="2026-03-07T00:47:22.032361591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 7 00:47:22.035514 containerd[1891]: time="2026-03-07T00:47:22.035492498Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:47:22.039409 containerd[1891]: time="2026-03-07T00:47:22.039386164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:47:22.039795 containerd[1891]: time="2026-03-07T00:47:22.039768168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 603.634135ms"
Mar 7 00:47:22.039851 containerd[1891]: time="2026-03-07T00:47:22.039797721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 7 00:47:22.040497 containerd[1891]: time="2026-03-07T00:47:22.040471622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 00:47:22.724608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589759707.mount: Deactivated successfully.
Mar 7 00:47:23.628113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 00:47:23.630145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:47:23.642796 containerd[1891]: time="2026-03-07T00:47:23.642751262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:23.647714 containerd[1891]: time="2026-03-07T00:47:23.647493723Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 7 00:47:23.650732 containerd[1891]: time="2026-03-07T00:47:23.650635262Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:23.660841 containerd[1891]: time="2026-03-07T00:47:23.660803846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:23.662043 containerd[1891]: time="2026-03-07T00:47:23.662018364Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.621520252s"
Mar 7 00:47:23.662150 containerd[1891]: time="2026-03-07T00:47:23.662136303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 7 00:47:23.772595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:23.775482 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:47:23.867580 kubelet[2811]: E0307 00:47:23.867536 2811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:47:23.871387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:47:23.871494 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:47:23.871825 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.4M memory peak.
Mar 7 00:47:24.544976 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 7 00:47:26.686964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:26.687186 systemd[1]: kubelet.service: Consumed 105ms CPU time, 105.4M memory peak.
Mar 7 00:47:26.689088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:47:26.718389 systemd[1]: Reload requested from client PID 2839 ('systemctl') (unit session-9.scope)...
Mar 7 00:47:26.718523 systemd[1]: Reloading...
Mar 7 00:47:26.787113 update_engine[1878]: I20260307 00:47:26.786710 1878 update_attempter.cc:509] Updating boot flags...
Mar 7 00:47:26.803737 zram_generator::config[2882]: No configuration found.
Mar 7 00:47:26.990619 systemd[1]: Reloading finished in 271 ms.
Mar 7 00:47:27.034617 systemd-logind[1870]: Removed session 4.
Mar 7 00:47:27.064197 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 00:47:27.064286 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 00:47:27.064561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:27.069156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:47:27.558951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:27.573904 (kubelet)[3118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 00:47:27.597300 kubelet[3118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:47:27.597300 kubelet[3118]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 00:47:27.597300 kubelet[3118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:47:27.597621 kubelet[3118]: I0307 00:47:27.597332 3118 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 00:47:27.723587 kubelet[3118]: I0307 00:47:27.723549 3118 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 00:47:27.724126 kubelet[3118]: I0307 00:47:27.723949 3118 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 00:47:27.724194 kubelet[3118]: I0307 00:47:27.724176 3118 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 00:47:27.740518 kubelet[3118]: E0307 00:47:27.740485 3118 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 00:47:27.741334 kubelet[3118]: I0307 00:47:27.741229 3118 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:47:27.747310 kubelet[3118]: I0307 00:47:27.747289 3118 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 7 00:47:27.751436 kubelet[3118]: I0307 00:47:27.751416 3118 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 00:47:27.752648 kubelet[3118]: I0307 00:47:27.752317 3118 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:47:27.752648 kubelet[3118]: I0307 00:47:27.752346 3118 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-9877c76adf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 00:47:27.752648 kubelet[3118]: I0307 00:47:27.752479 3118 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 00:47:27.752648 kubelet[3118]: I0307 00:47:27.752485 3118 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 00:47:27.753196 kubelet[3118]: I0307 00:47:27.753177 3118 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:47:27.755581 kubelet[3118]: I0307 00:47:27.755563 3118 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 00:47:27.755691 kubelet[3118]: I0307 00:47:27.755678 3118 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:47:27.757024 kubelet[3118]: I0307 00:47:27.757003 3118 kubelet.go:386] "Adding apiserver pod source"
Mar 7 00:47:27.758170 kubelet[3118]: I0307 00:47:27.758155 3118 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:47:27.761421 kubelet[3118]: E0307 00:47:27.761394 3118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-9877c76adf&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 00:47:27.762393 kubelet[3118]: E0307 00:47:27.762366 3118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 00:47:27.762678 kubelet[3118]: I0307 00:47:27.762649 3118 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 7 00:47:27.763220 kubelet[3118]: I0307 00:47:27.763204 3118 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:47:27.763349 kubelet[3118]: W0307 00:47:27.763338 3118 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 00:47:27.768511 kubelet[3118]: I0307 00:47:27.768467 3118 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 00:47:27.768616 kubelet[3118]: I0307 00:47:27.768608 3118 server.go:1289] "Started kubelet"
Mar 7 00:47:27.770850 kubelet[3118]: I0307 00:47:27.770833 3118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 00:47:27.773158 kubelet[3118]: E0307 00:47:27.771567 3118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-9877c76adf.189a68a1df44ece4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-9877c76adf,UID:ci-4459.2.3-n-9877c76adf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-9877c76adf,},FirstTimestamp:2026-03-07 00:47:27.768571108 +0000 UTC m=+0.191568647,LastTimestamp:2026-03-07 00:47:27.768571108 +0000 UTC m=+0.191568647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-9877c76adf,}"
Mar 7 00:47:27.773632 kubelet[3118]: I0307 00:47:27.773599 3118 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:47:27.774502 kubelet[3118]: I0307 00:47:27.774487 3118 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:47:27.777084 kubelet[3118]: I0307 00:47:27.777040 3118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:47:27.777324 kubelet[3118]: I0307 00:47:27.777301 3118 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:47:27.777867 kubelet[3118]: I0307 00:47:27.777420 3118 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 00:47:27.778117 kubelet[3118]: I0307 00:47:27.778101 3118 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:47:27.779083 kubelet[3118]: I0307 00:47:27.777436 3118 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 00:47:27.779184 kubelet[3118]: I0307 00:47:27.779174 3118 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 00:47:27.779237 kubelet[3118]: E0307 00:47:27.777535 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:27.779788 kubelet[3118]: E0307 00:47:27.779753 3118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-9877c76adf?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms"
Mar 7 00:47:27.780101 kubelet[3118]: I0307 00:47:27.780076 3118 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 00:47:27.780917 kubelet[3118]: E0307 00:47:27.780897 3118 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 00:47:27.781951 kubelet[3118]: E0307 00:47:27.781936 3118 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 00:47:27.782165 kubelet[3118]: I0307 00:47:27.782150 3118 factory.go:223] Registration of the containerd container factory successfully
Mar 7 00:47:27.782304 kubelet[3118]: I0307 00:47:27.782226 3118 factory.go:223] Registration of the systemd container factory successfully
Mar 7 00:47:27.799494 kubelet[3118]: I0307 00:47:27.799470 3118 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 00:47:27.799815 kubelet[3118]: I0307 00:47:27.799586 3118 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 00:47:27.799815 kubelet[3118]: I0307 00:47:27.799605 3118 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:47:27.880320 kubelet[3118]: E0307 00:47:27.880291 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:27.906584 kubelet[3118]: I0307 00:47:27.906323 3118 policy_none.go:49] "None policy: Start"
Mar 7 00:47:27.906584 kubelet[3118]: I0307 00:47:27.906365 3118 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 00:47:27.906584 kubelet[3118]: I0307 00:47:27.906377 3118 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 00:47:27.916053 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 00:47:27.920213 kubelet[3118]: I0307 00:47:27.920173 3118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:47:27.922023 kubelet[3118]: I0307 00:47:27.921598 3118 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:47:27.922023 kubelet[3118]: I0307 00:47:27.921627 3118 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 00:47:27.922023 kubelet[3118]: I0307 00:47:27.921649 3118 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:47:27.922023 kubelet[3118]: I0307 00:47:27.921693 3118 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 00:47:27.922023 kubelet[3118]: E0307 00:47:27.921729 3118 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 00:47:27.924650 kubelet[3118]: E0307 00:47:27.924617 3118 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 00:47:27.927488 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 00:47:27.930226 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 00:47:27.940691 kubelet[3118]: E0307 00:47:27.940304 3118 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 00:47:27.940691 kubelet[3118]: I0307 00:47:27.940482 3118 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 00:47:27.940691 kubelet[3118]: I0307 00:47:27.940493 3118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 00:47:27.941247 kubelet[3118]: I0307 00:47:27.940964 3118 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 00:47:27.942329 kubelet[3118]: E0307 00:47:27.941749 3118 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 00:47:27.942329 kubelet[3118]: E0307 00:47:27.941784 3118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:27.981089 kubelet[3118]: E0307 00:47:27.981047 3118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-9877c76adf?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms"
Mar 7 00:47:28.034828 systemd[1]: Created slice kubepods-burstable-pod99556369e275c96a560b38aff9bea87a.slice - libcontainer container kubepods-burstable-pod99556369e275c96a560b38aff9bea87a.slice.
Mar 7 00:47:28.043489 kubelet[3118]: I0307 00:47:28.043467 3118 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.044088 kubelet[3118]: E0307 00:47:28.044018 3118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.047262 kubelet[3118]: E0307 00:47:28.047205 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.050500 systemd[1]: Created slice kubepods-burstable-pod52843f9a417ea1396fe0379fecb4f9ab.slice - libcontainer container kubepods-burstable-pod52843f9a417ea1396fe0379fecb4f9ab.slice.
Mar 7 00:47:28.057446 kubelet[3118]: E0307 00:47:28.057425 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.059303 systemd[1]: Created slice kubepods-burstable-pode1259f9389dd99002af1957a3749ce37.slice - libcontainer container kubepods-burstable-pode1259f9389dd99002af1957a3749ce37.slice.
Mar 7 00:47:28.062892 kubelet[3118]: E0307 00:47:28.062805 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080140 kubelet[3118]: I0307 00:47:28.080088 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99556369e275c96a560b38aff9bea87a-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" (UID: \"99556369e275c96a560b38aff9bea87a\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080140 kubelet[3118]: I0307 00:47:28.080134 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99556369e275c96a560b38aff9bea87a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" (UID: \"99556369e275c96a560b38aff9bea87a\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080140 kubelet[3118]: I0307 00:47:28.080151 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080312 kubelet[3118]: I0307 00:47:28.080162 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080312 kubelet[3118]: I0307 00:47:28.080171 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080312 kubelet[3118]: I0307 00:47:28.080182 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080312 kubelet[3118]: I0307 00:47:28.080193 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1259f9389dd99002af1957a3749ce37-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-9877c76adf\" (UID: \"e1259f9389dd99002af1957a3749ce37\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080312 kubelet[3118]: I0307 00:47:28.080202 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99556369e275c96a560b38aff9bea87a-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" (UID: \"99556369e275c96a560b38aff9bea87a\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.080387 kubelet[3118]: I0307 00:47:28.080210 3118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.245908 kubelet[3118]: I0307 00:47:28.245821 3118 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.246605 kubelet[3118]: E0307 00:47:28.246580 3118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.349084 containerd[1891]: time="2026-03-07T00:47:28.349040401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-9877c76adf,Uid:99556369e275c96a560b38aff9bea87a,Namespace:kube-system,Attempt:0,}"
Mar 7 00:47:28.358625 containerd[1891]: time="2026-03-07T00:47:28.358594318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-9877c76adf,Uid:52843f9a417ea1396fe0379fecb4f9ab,Namespace:kube-system,Attempt:0,}"
Mar 7 00:47:28.363454 containerd[1891]: time="2026-03-07T00:47:28.363343148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-9877c76adf,Uid:e1259f9389dd99002af1957a3749ce37,Namespace:kube-system,Attempt:0,}"
Mar 7 00:47:28.382336 kubelet[3118]: E0307 00:47:28.382295 3118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-9877c76adf?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms"
Mar 7 00:47:28.403388 containerd[1891]: time="2026-03-07T00:47:28.402824405Z" level=info msg="connecting to shim 79af3b6afc2addcf70b7474a422ff8dfcf1d8288fe582ce61a8a6e216b650e3f" address="unix:///run/containerd/s/a831b0684beb4a284afad3a968c665b8fd45c09b8a8029b04af9eb49b311b9b9" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:47:28.425809 systemd[1]: Started cri-containerd-79af3b6afc2addcf70b7474a422ff8dfcf1d8288fe582ce61a8a6e216b650e3f.scope - libcontainer container 79af3b6afc2addcf70b7474a422ff8dfcf1d8288fe582ce61a8a6e216b650e3f.
Mar 7 00:47:28.438200 containerd[1891]: time="2026-03-07T00:47:28.438143148Z" level=info msg="connecting to shim f225302393908e5344d31ed21909668f3733cbd3aeb95e6a871d9dc280b54169" address="unix:///run/containerd/s/e28bf834a0622613eb92b8e6cf88dad6ee34dfdfb96fa8beac7f4bf513458245" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:47:28.459927 systemd[1]: Started cri-containerd-f225302393908e5344d31ed21909668f3733cbd3aeb95e6a871d9dc280b54169.scope - libcontainer container f225302393908e5344d31ed21909668f3733cbd3aeb95e6a871d9dc280b54169.
Mar 7 00:47:28.472023 containerd[1891]: time="2026-03-07T00:47:28.471977179Z" level=info msg="connecting to shim 2c8fe9d2adec09b2dc5d862cb078228c34c83a1dfd028201f9db8e90872db741" address="unix:///run/containerd/s/8f0e569cb9bfb421919d23265d5e39547c1e82e39b80517ac3488fb4a0dc6724" namespace=k8s.io protocol=ttrpc version=3
Mar 7 00:47:28.495488 containerd[1891]: time="2026-03-07T00:47:28.495332857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-9877c76adf,Uid:99556369e275c96a560b38aff9bea87a,Namespace:kube-system,Attempt:0,} returns sandbox id \"79af3b6afc2addcf70b7474a422ff8dfcf1d8288fe582ce61a8a6e216b650e3f\""
Mar 7 00:47:28.499958 systemd[1]: Started cri-containerd-2c8fe9d2adec09b2dc5d862cb078228c34c83a1dfd028201f9db8e90872db741.scope - libcontainer container 2c8fe9d2adec09b2dc5d862cb078228c34c83a1dfd028201f9db8e90872db741.
Mar 7 00:47:28.503678 containerd[1891]: time="2026-03-07T00:47:28.503344773Z" level=info msg="CreateContainer within sandbox \"79af3b6afc2addcf70b7474a422ff8dfcf1d8288fe582ce61a8a6e216b650e3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 00:47:28.518497 containerd[1891]: time="2026-03-07T00:47:28.518448616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-9877c76adf,Uid:52843f9a417ea1396fe0379fecb4f9ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"f225302393908e5344d31ed21909668f3733cbd3aeb95e6a871d9dc280b54169\""
Mar 7 00:47:28.529634 containerd[1891]: time="2026-03-07T00:47:28.529047957Z" level=info msg="CreateContainer within sandbox \"f225302393908e5344d31ed21909668f3733cbd3aeb95e6a871d9dc280b54169\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 00:47:28.533403 containerd[1891]: time="2026-03-07T00:47:28.533369965Z" level=info msg="Container 3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:47:28.558077 containerd[1891]: time="2026-03-07T00:47:28.558037172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-9877c76adf,Uid:e1259f9389dd99002af1957a3749ce37,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c8fe9d2adec09b2dc5d862cb078228c34c83a1dfd028201f9db8e90872db741\""
Mar 7 00:47:28.567684 containerd[1891]: time="2026-03-07T00:47:28.567469693Z" level=info msg="CreateContainer within sandbox \"2c8fe9d2adec09b2dc5d862cb078228c34c83a1dfd028201f9db8e90872db741\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 00:47:28.568881 containerd[1891]: time="2026-03-07T00:47:28.568857760Z" level=info msg="Container feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:47:28.580890 containerd[1891]: time="2026-03-07T00:47:28.580861842Z" level=info msg="CreateContainer within sandbox \"79af3b6afc2addcf70b7474a422ff8dfcf1d8288fe582ce61a8a6e216b650e3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb\""
Mar 7 00:47:28.582278 containerd[1891]: time="2026-03-07T00:47:28.581503366Z" level=info msg="StartContainer for \"3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb\""
Mar 7 00:47:28.583068 containerd[1891]: time="2026-03-07T00:47:28.583036902Z" level=info msg="connecting to shim 3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb" address="unix:///run/containerd/s/a831b0684beb4a284afad3a968c665b8fd45c09b8a8029b04af9eb49b311b9b9" protocol=ttrpc version=3
Mar 7 00:47:28.586326 containerd[1891]: time="2026-03-07T00:47:28.585937737Z" level=info msg="Container 42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:47:28.595165 kubelet[3118]: E0307 00:47:28.595133 3118 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 00:47:28.595244 kubelet[3118]: E0307 00:47:28.595213 3118 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-9877c76adf&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 00:47:28.599797 systemd[1]: Started cri-containerd-3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb.scope - libcontainer container 3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb.
Mar 7 00:47:28.603602 containerd[1891]: time="2026-03-07T00:47:28.603562763Z" level=info msg="CreateContainer within sandbox \"f225302393908e5344d31ed21909668f3733cbd3aeb95e6a871d9dc280b54169\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480\""
Mar 7 00:47:28.604176 containerd[1891]: time="2026-03-07T00:47:28.604153718Z" level=info msg="StartContainer for \"feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480\""
Mar 7 00:47:28.605221 containerd[1891]: time="2026-03-07T00:47:28.605179358Z" level=info msg="connecting to shim feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480" address="unix:///run/containerd/s/e28bf834a0622613eb92b8e6cf88dad6ee34dfdfb96fa8beac7f4bf513458245" protocol=ttrpc version=3
Mar 7 00:47:28.620892 containerd[1891]: time="2026-03-07T00:47:28.620785457Z" level=info msg="CreateContainer within sandbox \"2c8fe9d2adec09b2dc5d862cb078228c34c83a1dfd028201f9db8e90872db741\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c\""
Mar 7 00:47:28.621431 containerd[1891]: time="2026-03-07T00:47:28.621382844Z" level=info msg="StartContainer for \"42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c\""
Mar 7 00:47:28.624028 containerd[1891]: time="2026-03-07T00:47:28.624006798Z" level=info msg="connecting to shim 42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c" address="unix:///run/containerd/s/8f0e569cb9bfb421919d23265d5e39547c1e82e39b80517ac3488fb4a0dc6724" protocol=ttrpc version=3
Mar 7 00:47:28.624843 systemd[1]: Started cri-containerd-feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480.scope - libcontainer container feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480.
Mar 7 00:47:28.649649 kubelet[3118]: I0307 00:47:28.649604 3118 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.650212 kubelet[3118]: E0307 00:47:28.649935 3118 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.651271 systemd[1]: Started cri-containerd-42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c.scope - libcontainer container 42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c.
Mar 7 00:47:28.651938 containerd[1891]: time="2026-03-07T00:47:28.651859610Z" level=info msg="StartContainer for \"3dec91b6f465b1773a0253bc5306222f28755332d700afcc931954c1a9db2feb\" returns successfully"
Mar 7 00:47:28.676737 containerd[1891]: time="2026-03-07T00:47:28.676648549Z" level=info msg="StartContainer for \"feaa3324ee13ae32c1168ab4c0abe5bb784964c8303585360913b0ab28db1480\" returns successfully"
Mar 7 00:47:28.711283 containerd[1891]: time="2026-03-07T00:47:28.711241588Z" level=info msg="StartContainer for \"42f11ead5248c9618aecd9a24689cd5055c091e56c0ec39f31ca38d018e8912c\" returns successfully"
Mar 7 00:47:28.933791 kubelet[3118]: E0307 00:47:28.933760 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.934970 kubelet[3118]: E0307 00:47:28.934949 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:28.937835 kubelet[3118]: E0307 00:47:28.937812 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:29.452070 kubelet[3118]: I0307 00:47:29.451794 3118 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:29.935774 kubelet[3118]: E0307 00:47:29.935731 3118 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:29.940829 kubelet[3118]: E0307 00:47:29.940770 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:29.941744 kubelet[3118]: E0307 00:47:29.941729 3118 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-9877c76adf\" not found" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:29.976115 kubelet[3118]: I0307 00:47:29.976077 3118 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:29.976115 kubelet[3118]: E0307 00:47:29.976109 3118 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.3-n-9877c76adf\": node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.039343 kubelet[3118]: E0307 00:47:30.039296 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.139945 kubelet[3118]: E0307 00:47:30.139903 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.240779 kubelet[3118]: E0307 00:47:30.240666 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.341522 kubelet[3118]: E0307 00:47:30.341479 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.442227 kubelet[3118]: E0307 00:47:30.442185 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.542841 kubelet[3118]: E0307 00:47:30.542715 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.643269 kubelet[3118]: E0307 00:47:30.643228 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.744300 kubelet[3118]: E0307 00:47:30.744262 3118 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:30.778182 kubelet[3118]: I0307 00:47:30.777878 3118 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:30.781732 kubelet[3118]: E0307 00:47:30.781701 3118 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:30.781732 kubelet[3118]: I0307 00:47:30.781727 3118 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:30.783143 kubelet[3118]: E0307 00:47:30.783106 3118 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:30.783143 kubelet[3118]: I0307 00:47:30.783131 3118 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:30.784475 kubelet[3118]: E0307 00:47:30.784453 3118 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-9877c76adf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.3-n-9877c76adf"
Mar 7 00:47:31.764506 kubelet[3118]: I0307 00:47:31.764462 3118 apiserver.go:52] "Watching apiserver"
Mar 7 00:47:31.780232 kubelet[3118]: I0307 00:47:31.780199 3118 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 00:47:31.999359 systemd[1]: Reload requested from client PID 3400 ('systemctl') (unit session-9.scope)...
Mar 7 00:47:31.999375 systemd[1]: Reloading...
Mar 7 00:47:32.075841 zram_generator::config[3450]: No configuration found.
Mar 7 00:47:32.242179 systemd[1]: Reloading finished in 242 ms.
Mar 7 00:47:32.277224 kubelet[3118]: I0307 00:47:32.277003 3118 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:47:32.277319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:47:32.291504 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 00:47:32.291773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:32.291846 systemd[1]: kubelet.service: Consumed 434ms CPU time, 126M memory peak.
Mar 7 00:47:32.293543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:47:32.824782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:47:32.829920 (kubelet)[3511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 00:47:32.861204 kubelet[3511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:47:32.861204 kubelet[3511]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 00:47:32.861204 kubelet[3511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:47:32.861204 kubelet[3511]: I0307 00:47:32.861196 3511 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 00:47:32.868640 kubelet[3511]: I0307 00:47:32.868582 3511 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 00:47:32.868640 kubelet[3511]: I0307 00:47:32.868609 3511 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 00:47:32.872780 kubelet[3511]: I0307 00:47:32.869770 3511 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 00:47:32.872780 kubelet[3511]: I0307 00:47:32.871064 3511 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 00:47:32.873403 kubelet[3511]: I0307 00:47:32.873381 3511 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:47:32.877833 kubelet[3511]: I0307 00:47:32.877813 3511 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 7 00:47:32.882057 kubelet[3511]: I0307 00:47:32.882015 3511 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 00:47:32.882427 kubelet[3511]: I0307 00:47:32.882393 3511 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:47:32.882548 kubelet[3511]: I0307 00:47:32.882422 3511 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-9877c76adf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 00:47:32.882624 kubelet[3511]: I0307 00:47:32.882608 3511 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 00:47:32.882624 kubelet[3511]: I0307 00:47:32.882620 3511 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 00:47:32.882683 kubelet[3511]: I0307 00:47:32.882670 3511 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:47:32.882829 kubelet[3511]: I0307 00:47:32.882813 3511 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 00:47:32.882857 kubelet[3511]: I0307 00:47:32.882831 3511 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:47:32.883269 kubelet[3511]: I0307 00:47:32.882855 3511 kubelet.go:386] "Adding apiserver pod source"
Mar 7 00:47:32.883719 kubelet[3511]: I0307 00:47:32.883701 3511 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:47:32.886765 kubelet[3511]: I0307 00:47:32.886743 3511 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 7 00:47:32.887133 kubelet[3511]: I0307 00:47:32.887105 3511 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:47:32.890703 kubelet[3511]: I0307 00:47:32.890682 3511 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 00:47:32.890761 kubelet[3511]: I0307 00:47:32.890716 3511 server.go:1289] "Started kubelet"
Mar 7 00:47:32.892666 kubelet[3511]: I0307 00:47:32.892263 3511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 00:47:32.902700 kubelet[3511]: I0307 00:47:32.901220 3511 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:47:32.902700 kubelet[3511]: I0307 00:47:32.902297 3511 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:47:32.903360 kubelet[3511]: I0307 00:47:32.903314 3511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:47:32.904850 kubelet[3511]: I0307 00:47:32.904830 3511 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 00:47:32.905015 kubelet[3511]: I0307 00:47:32.904999 3511 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:47:32.905098 kubelet[3511]: E0307 00:47:32.905015 3511 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-9877c76adf\" not found"
Mar 7 00:47:32.905292 kubelet[3511]: I0307 00:47:32.905277 3511 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:47:32.907671 kubelet[3511]: I0307 00:47:32.906129 3511 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 00:47:32.907671 kubelet[3511]: I0307 00:47:32.906237 3511 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 00:47:32.907671 kubelet[3511]: I0307 00:47:32.907490 3511 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:47:32.916882 kubelet[3511]: I0307 00:47:32.916858 3511 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:47:32.916980 kubelet[3511]: I0307 00:47:32.916973 3511 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 00:47:32.917037 kubelet[3511]: I0307 00:47:32.917029 3511 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:47:32.917078 kubelet[3511]: I0307 00:47:32.917071 3511 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:47:32.917162 kubelet[3511]: E0307 00:47:32.917138 3511 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:47:32.929893 kubelet[3511]: I0307 00:47:32.929872 3511 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:47:32.930017 kubelet[3511]: I0307 00:47:32.930007 3511 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:47:32.930719 kubelet[3511]: I0307 00:47:32.930140 3511 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:47:32.970356 kubelet[3511]: I0307 00:47:32.970308 3511 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:47:32.970356 kubelet[3511]: I0307 00:47:32.970327 3511 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:47:32.970356 kubelet[3511]: I0307 00:47:32.970348 3511 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:47:32.970564 kubelet[3511]: I0307 00:47:32.970471 3511 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 00:47:32.970564 kubelet[3511]: I0307 00:47:32.970478 3511 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 00:47:32.970564 kubelet[3511]: I0307 00:47:32.970493 3511 policy_none.go:49] "None policy: Start" Mar 7 00:47:32.970564 kubelet[3511]: I0307 00:47:32.970500 3511 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:47:32.970564 kubelet[3511]: I0307 00:47:32.970507 3511 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:47:32.970715 kubelet[3511]: I0307 00:47:32.970587 3511 state_mem.go:75] "Updated machine memory state" Mar 7 00:47:32.975876 kubelet[3511]: E0307 
00:47:32.975846 3511 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:47:32.976037 kubelet[3511]: I0307 00:47:32.976021 3511 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:47:32.976742 kubelet[3511]: I0307 00:47:32.976036 3511 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:47:32.976742 kubelet[3511]: I0307 00:47:32.976479 3511 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:47:32.980834 kubelet[3511]: E0307 00:47:32.980814 3511 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:47:33.018618 kubelet[3511]: I0307 00:47:33.018579 3511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.019005 kubelet[3511]: I0307 00:47:33.018584 3511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.019005 kubelet[3511]: I0307 00:47:33.018928 3511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.028647 kubelet[3511]: I0307 00:47:33.028616 3511 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:47:33.034855 kubelet[3511]: I0307 00:47:33.034824 3511 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:47:33.034952 kubelet[3511]: I0307 00:47:33.034873 3511 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots]" Mar 7 00:47:33.078034 kubelet[3511]: I0307 00:47:33.077931 3511 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.089998 kubelet[3511]: I0307 00:47:33.089955 3511 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.090269 kubelet[3511]: I0307 00:47:33.090171 3511 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107050 kubelet[3511]: I0307 00:47:33.107022 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99556369e275c96a560b38aff9bea87a-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" (UID: \"99556369e275c96a560b38aff9bea87a\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107050 kubelet[3511]: I0307 00:47:33.107051 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107285 kubelet[3511]: I0307 00:47:33.107066 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107285 kubelet[3511]: I0307 00:47:33.107079 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107285 kubelet[3511]: I0307 00:47:33.107093 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99556369e275c96a560b38aff9bea87a-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" (UID: \"99556369e275c96a560b38aff9bea87a\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107285 kubelet[3511]: I0307 00:47:33.107104 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99556369e275c96a560b38aff9bea87a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" (UID: \"99556369e275c96a560b38aff9bea87a\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107285 kubelet[3511]: I0307 00:47:33.107113 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107374 kubelet[3511]: I0307 00:47:33.107144 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52843f9a417ea1396fe0379fecb4f9ab-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-9877c76adf\" (UID: \"52843f9a417ea1396fe0379fecb4f9ab\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.107374 
kubelet[3511]: I0307 00:47:33.107168 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1259f9389dd99002af1957a3749ce37-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-9877c76adf\" (UID: \"e1259f9389dd99002af1957a3749ce37\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.886637 kubelet[3511]: I0307 00:47:33.886096 3511 apiserver.go:52] "Watching apiserver" Mar 7 00:47:33.906562 kubelet[3511]: I0307 00:47:33.906517 3511 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:47:33.959871 kubelet[3511]: I0307 00:47:33.959838 3511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.968314 kubelet[3511]: I0307 00:47:33.968106 3511 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 00:47:33.968314 kubelet[3511]: E0307 00:47:33.968159 3511 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-9877c76adf\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" Mar 7 00:47:33.978533 kubelet[3511]: I0307 00:47:33.978482 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-9877c76adf" podStartSLOduration=0.97846794 podStartE2EDuration="978.46794ms" podCreationTimestamp="2026-03-07 00:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:47:33.97829907 +0000 UTC m=+1.144811434" watchObservedRunningTime="2026-03-07 00:47:33.97846794 +0000 UTC m=+1.144980304" Mar 7 00:47:33.997985 kubelet[3511]: I0307 00:47:33.997927 3511 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-n-9877c76adf" podStartSLOduration=0.997908871 podStartE2EDuration="997.908871ms" podCreationTimestamp="2026-03-07 00:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:47:33.989090344 +0000 UTC m=+1.155602708" watchObservedRunningTime="2026-03-07 00:47:33.997908871 +0000 UTC m=+1.164421235" Mar 7 00:47:38.622335 kubelet[3511]: I0307 00:47:38.622195 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-n-9877c76adf" podStartSLOduration=5.622181199 podStartE2EDuration="5.622181199s" podCreationTimestamp="2026-03-07 00:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:47:33.998549941 +0000 UTC m=+1.165062385" watchObservedRunningTime="2026-03-07 00:47:38.622181199 +0000 UTC m=+5.788693571" Mar 7 00:47:39.057799 kubelet[3511]: I0307 00:47:39.057720 3511 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:47:39.058399 containerd[1891]: time="2026-03-07T00:47:39.058337728Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 00:47:39.060320 kubelet[3511]: I0307 00:47:39.059808 3511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:47:40.041500 systemd[1]: Created slice kubepods-besteffort-pod4ee97ee0_c9ae_4d3f_b073_3443993d6e41.slice - libcontainer container kubepods-besteffort-pod4ee97ee0_c9ae_4d3f_b073_3443993d6e41.slice. 
Mar 7 00:47:40.045684 kubelet[3511]: I0307 00:47:40.045446 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2kl\" (UniqueName: \"kubernetes.io/projected/4ee97ee0-c9ae-4d3f-b073-3443993d6e41-kube-api-access-dd2kl\") pod \"kube-proxy-b78vs\" (UID: \"4ee97ee0-c9ae-4d3f-b073-3443993d6e41\") " pod="kube-system/kube-proxy-b78vs" Mar 7 00:47:40.045684 kubelet[3511]: I0307 00:47:40.045488 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ee97ee0-c9ae-4d3f-b073-3443993d6e41-kube-proxy\") pod \"kube-proxy-b78vs\" (UID: \"4ee97ee0-c9ae-4d3f-b073-3443993d6e41\") " pod="kube-system/kube-proxy-b78vs" Mar 7 00:47:40.045684 kubelet[3511]: I0307 00:47:40.045502 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ee97ee0-c9ae-4d3f-b073-3443993d6e41-lib-modules\") pod \"kube-proxy-b78vs\" (UID: \"4ee97ee0-c9ae-4d3f-b073-3443993d6e41\") " pod="kube-system/kube-proxy-b78vs" Mar 7 00:47:40.045684 kubelet[3511]: I0307 00:47:40.045525 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ee97ee0-c9ae-4d3f-b073-3443993d6e41-xtables-lock\") pod \"kube-proxy-b78vs\" (UID: \"4ee97ee0-c9ae-4d3f-b073-3443993d6e41\") " pod="kube-system/kube-proxy-b78vs" Mar 7 00:47:40.284855 systemd[1]: Created slice kubepods-besteffort-pod9bcdf96e_bb5a_4c8b_b0b1_15d6a02d8db0.slice - libcontainer container kubepods-besteffort-pod9bcdf96e_bb5a_4c8b_b0b1_15d6a02d8db0.slice. 
Mar 7 00:47:40.347422 kubelet[3511]: I0307 00:47:40.347277 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9bcdf96e-bb5a-4c8b-b0b1-15d6a02d8db0-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-gt97s\" (UID: \"9bcdf96e-bb5a-4c8b-b0b1-15d6a02d8db0\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gt97s" Mar 7 00:47:40.347422 kubelet[3511]: I0307 00:47:40.347331 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzs5w\" (UniqueName: \"kubernetes.io/projected/9bcdf96e-bb5a-4c8b-b0b1-15d6a02d8db0-kube-api-access-vzs5w\") pod \"tigera-operator-6bf85f8dd-gt97s\" (UID: \"9bcdf96e-bb5a-4c8b-b0b1-15d6a02d8db0\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gt97s" Mar 7 00:47:40.351152 containerd[1891]: time="2026-03-07T00:47:40.351108948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b78vs,Uid:4ee97ee0-c9ae-4d3f-b073-3443993d6e41,Namespace:kube-system,Attempt:0,}" Mar 7 00:47:40.405284 containerd[1891]: time="2026-03-07T00:47:40.405227490Z" level=info msg="connecting to shim 87d9f7ad2a0e9ada8258c826b8577db84e4f2af16b03263e74afc301c53588ec" address="unix:///run/containerd/s/092aa5ac5304d2f28b21c9e6e43ea66e6a9eadfefd91ce4fcf574ed32df84abc" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:47:40.424804 systemd[1]: Started cri-containerd-87d9f7ad2a0e9ada8258c826b8577db84e4f2af16b03263e74afc301c53588ec.scope - libcontainer container 87d9f7ad2a0e9ada8258c826b8577db84e4f2af16b03263e74afc301c53588ec. 
Mar 7 00:47:40.451791 containerd[1891]: time="2026-03-07T00:47:40.451749251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b78vs,Uid:4ee97ee0-c9ae-4d3f-b073-3443993d6e41,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d9f7ad2a0e9ada8258c826b8577db84e4f2af16b03263e74afc301c53588ec\"" Mar 7 00:47:40.461688 containerd[1891]: time="2026-03-07T00:47:40.461628529Z" level=info msg="CreateContainer within sandbox \"87d9f7ad2a0e9ada8258c826b8577db84e4f2af16b03263e74afc301c53588ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:47:40.483875 containerd[1891]: time="2026-03-07T00:47:40.483573579Z" level=info msg="Container 7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:47:40.500985 containerd[1891]: time="2026-03-07T00:47:40.500935729Z" level=info msg="CreateContainer within sandbox \"87d9f7ad2a0e9ada8258c826b8577db84e4f2af16b03263e74afc301c53588ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2\"" Mar 7 00:47:40.502710 containerd[1891]: time="2026-03-07T00:47:40.501812998Z" level=info msg="StartContainer for \"7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2\"" Mar 7 00:47:40.503087 containerd[1891]: time="2026-03-07T00:47:40.503057982Z" level=info msg="connecting to shim 7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2" address="unix:///run/containerd/s/092aa5ac5304d2f28b21c9e6e43ea66e6a9eadfefd91ce4fcf574ed32df84abc" protocol=ttrpc version=3 Mar 7 00:47:40.517789 systemd[1]: Started cri-containerd-7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2.scope - libcontainer container 7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2. 
Mar 7 00:47:40.578269 containerd[1891]: time="2026-03-07T00:47:40.578230969Z" level=info msg="StartContainer for \"7026675c2e2586229245942c6a887d846ecef822a88cf1cfefd4a78007da33e2\" returns successfully" Mar 7 00:47:40.590302 containerd[1891]: time="2026-03-07T00:47:40.590250843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gt97s,Uid:9bcdf96e-bb5a-4c8b-b0b1-15d6a02d8db0,Namespace:tigera-operator,Attempt:0,}" Mar 7 00:47:40.641740 containerd[1891]: time="2026-03-07T00:47:40.641686474Z" level=info msg="connecting to shim 18b4a83354f3d94c8271d4ffe4a946178579accf440f5f67d2a599872143260b" address="unix:///run/containerd/s/25f3f78620ad381af919249597b0c36cbdebcaeb9aed88d8bafcebab54a87f34" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:47:40.662802 systemd[1]: Started cri-containerd-18b4a83354f3d94c8271d4ffe4a946178579accf440f5f67d2a599872143260b.scope - libcontainer container 18b4a83354f3d94c8271d4ffe4a946178579accf440f5f67d2a599872143260b. Mar 7 00:47:40.705221 containerd[1891]: time="2026-03-07T00:47:40.705148228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gt97s,Uid:9bcdf96e-bb5a-4c8b-b0b1-15d6a02d8db0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"18b4a83354f3d94c8271d4ffe4a946178579accf440f5f67d2a599872143260b\"" Mar 7 00:47:40.708024 containerd[1891]: time="2026-03-07T00:47:40.707995872Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 00:47:42.382274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101580336.mount: Deactivated successfully. 
Mar 7 00:47:42.704746 containerd[1891]: time="2026-03-07T00:47:42.704596762Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:47:42.707439 containerd[1891]: time="2026-03-07T00:47:42.707293697Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 00:47:42.710716 containerd[1891]: time="2026-03-07T00:47:42.710683934Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:47:42.714873 containerd[1891]: time="2026-03-07T00:47:42.714818075Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:47:42.715341 containerd[1891]: time="2026-03-07T00:47:42.715196535Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.007166318s" Mar 7 00:47:42.715341 containerd[1891]: time="2026-03-07T00:47:42.715226600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 00:47:42.723833 containerd[1891]: time="2026-03-07T00:47:42.723787796Z" level=info msg="CreateContainer within sandbox \"18b4a83354f3d94c8271d4ffe4a946178579accf440f5f67d2a599872143260b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 00:47:42.742178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776624274.mount: Deactivated successfully. 
Mar 7 00:47:42.743103 containerd[1891]: time="2026-03-07T00:47:42.742642354Z" level=info msg="Container 05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:47:42.763395 containerd[1891]: time="2026-03-07T00:47:42.763354533Z" level=info msg="CreateContainer within sandbox \"18b4a83354f3d94c8271d4ffe4a946178579accf440f5f67d2a599872143260b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17\"" Mar 7 00:47:42.764100 containerd[1891]: time="2026-03-07T00:47:42.764078996Z" level=info msg="StartContainer for \"05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17\"" Mar 7 00:47:42.765646 containerd[1891]: time="2026-03-07T00:47:42.765617182Z" level=info msg="connecting to shim 05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17" address="unix:///run/containerd/s/25f3f78620ad381af919249597b0c36cbdebcaeb9aed88d8bafcebab54a87f34" protocol=ttrpc version=3 Mar 7 00:47:42.787848 systemd[1]: Started cri-containerd-05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17.scope - libcontainer container 05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17. 
Mar 7 00:47:42.816509 containerd[1891]: time="2026-03-07T00:47:42.816403200Z" level=info msg="StartContainer for \"05bced975212efd90ec27204e16b9fa43c949e0753973c3e49bd544a30a53a17\" returns successfully" Mar 7 00:47:42.909590 kubelet[3511]: I0307 00:47:42.909387 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b78vs" podStartSLOduration=2.909369263 podStartE2EDuration="2.909369263s" podCreationTimestamp="2026-03-07 00:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:47:40.99916044 +0000 UTC m=+8.165672804" watchObservedRunningTime="2026-03-07 00:47:42.909369263 +0000 UTC m=+10.075881627" Mar 7 00:47:43.012605 kubelet[3511]: I0307 00:47:43.012396 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-gt97s" podStartSLOduration=1.002739141 podStartE2EDuration="3.012378082s" podCreationTimestamp="2026-03-07 00:47:40 +0000 UTC" firstStartedPulling="2026-03-07 00:47:40.706401301 +0000 UTC m=+7.872913665" lastFinishedPulling="2026-03-07 00:47:42.716040242 +0000 UTC m=+9.882552606" observedRunningTime="2026-03-07 00:47:42.997281708 +0000 UTC m=+10.163794120" watchObservedRunningTime="2026-03-07 00:47:43.012378082 +0000 UTC m=+10.178890558" Mar 7 00:47:47.907425 sudo[2362]: pam_unix(sudo:session): session closed for user root Mar 7 00:47:47.986006 sshd[2361]: Connection closed by 10.200.16.10 port 38746 Mar 7 00:47:47.985019 sshd-session[2358]: pam_unix(sshd:session): session closed for user core Mar 7 00:47:47.990390 systemd-logind[1870]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:47:47.991011 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:38746.service: Deactivated successfully. Mar 7 00:47:47.995157 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 7 00:47:47.995818 systemd[1]: session-9.scope: Consumed 4.116s CPU time, 223.4M memory peak. Mar 7 00:47:47.999081 systemd-logind[1870]: Removed session 9. Mar 7 00:47:50.730455 systemd[1]: Created slice kubepods-besteffort-pod73a07220_fa0f_401c_8414_ea1f73e911cf.slice - libcontainer container kubepods-besteffort-pod73a07220_fa0f_401c_8414_ea1f73e911cf.slice. Mar 7 00:47:50.810161 kubelet[3511]: I0307 00:47:50.810112 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/73a07220-fa0f-401c-8414-ea1f73e911cf-typha-certs\") pod \"calico-typha-647669788d-ghpbl\" (UID: \"73a07220-fa0f-401c-8414-ea1f73e911cf\") " pod="calico-system/calico-typha-647669788d-ghpbl" Mar 7 00:47:50.810161 kubelet[3511]: I0307 00:47:50.810158 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73a07220-fa0f-401c-8414-ea1f73e911cf-tigera-ca-bundle\") pod \"calico-typha-647669788d-ghpbl\" (UID: \"73a07220-fa0f-401c-8414-ea1f73e911cf\") " pod="calico-system/calico-typha-647669788d-ghpbl" Mar 7 00:47:50.811256 kubelet[3511]: I0307 00:47:50.810181 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6k6l\" (UniqueName: \"kubernetes.io/projected/73a07220-fa0f-401c-8414-ea1f73e911cf-kube-api-access-s6k6l\") pod \"calico-typha-647669788d-ghpbl\" (UID: \"73a07220-fa0f-401c-8414-ea1f73e911cf\") " pod="calico-system/calico-typha-647669788d-ghpbl" Mar 7 00:47:50.836017 systemd[1]: Created slice kubepods-besteffort-pod600bf431_de96_4560_80c7_8e75df20f03e.slice - libcontainer container kubepods-besteffort-pod600bf431_de96_4560_80c7_8e75df20f03e.slice. 
Mar 7 00:47:50.910391 kubelet[3511]: I0307 00:47:50.910340 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-policysync\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.910615 kubelet[3511]: I0307 00:47:50.910384 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/600bf431-de96-4560-80c7-8e75df20f03e-node-certs\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.910697 kubelet[3511]: I0307 00:47:50.910638 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-nodeproc\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911285 kubelet[3511]: I0307 00:47:50.911214 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-var-run-calico\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911285 kubelet[3511]: I0307 00:47:50.911284 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-lib-modules\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911812 kubelet[3511]: I0307 00:47:50.911322 3511 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-cni-bin-dir\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911812 kubelet[3511]: I0307 00:47:50.911386 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-cni-net-dir\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911812 kubelet[3511]: I0307 00:47:50.911410 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/600bf431-de96-4560-80c7-8e75df20f03e-tigera-ca-bundle\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911812 kubelet[3511]: I0307 00:47:50.911422 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-var-lib-calico\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911812 kubelet[3511]: I0307 00:47:50.911436 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n892x\" (UniqueName: \"kubernetes.io/projected/600bf431-de96-4560-80c7-8e75df20f03e-kube-api-access-n892x\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911900 kubelet[3511]: I0307 00:47:50.911446 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-sys-fs\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911900 kubelet[3511]: I0307 00:47:50.911455 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-flexvol-driver-host\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911900 kubelet[3511]: I0307 00:47:50.911467 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-xtables-lock\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911900 kubelet[3511]: I0307 00:47:50.911479 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-bpffs\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.911900 kubelet[3511]: I0307 00:47:50.911488 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/600bf431-de96-4560-80c7-8e75df20f03e-cni-log-dir\") pod \"calico-node-fsg7p\" (UID: \"600bf431-de96-4560-80c7-8e75df20f03e\") " pod="calico-system/calico-node-fsg7p" Mar 7 00:47:50.977374 kubelet[3511]: E0307 00:47:50.977327 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:47:51.012883 kubelet[3511]: I0307 00:47:51.012469 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c239e1-f122-4450-a1a5-965f8a8b2b49-kubelet-dir\") pod \"csi-node-driver-7cs64\" (UID: \"b1c239e1-f122-4450-a1a5-965f8a8b2b49\") " pod="calico-system/csi-node-driver-7cs64" Mar 7 00:47:51.012883 kubelet[3511]: I0307 00:47:51.012533 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b1c239e1-f122-4450-a1a5-965f8a8b2b49-registration-dir\") pod \"csi-node-driver-7cs64\" (UID: \"b1c239e1-f122-4450-a1a5-965f8a8b2b49\") " pod="calico-system/csi-node-driver-7cs64" Mar 7 00:47:51.012883 kubelet[3511]: I0307 00:47:51.012549 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b1c239e1-f122-4450-a1a5-965f8a8b2b49-varrun\") pod \"csi-node-driver-7cs64\" (UID: \"b1c239e1-f122-4450-a1a5-965f8a8b2b49\") " pod="calico-system/csi-node-driver-7cs64" Mar 7 00:47:51.012883 kubelet[3511]: I0307 00:47:51.012597 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b1c239e1-f122-4450-a1a5-965f8a8b2b49-socket-dir\") pod \"csi-node-driver-7cs64\" (UID: \"b1c239e1-f122-4450-a1a5-965f8a8b2b49\") " pod="calico-system/csi-node-driver-7cs64" Mar 7 00:47:51.015168 kubelet[3511]: I0307 00:47:51.014236 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndd8b\" (UniqueName: \"kubernetes.io/projected/b1c239e1-f122-4450-a1a5-965f8a8b2b49-kube-api-access-ndd8b\") pod \"csi-node-driver-7cs64\" 
(UID: \"b1c239e1-f122-4450-a1a5-965f8a8b2b49\") " pod="calico-system/csi-node-driver-7cs64" Mar 7 00:47:51.023179 kubelet[3511]: E0307 00:47:51.023151 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.023357 kubelet[3511]: W0307 00:47:51.023338 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.023534 kubelet[3511]: E0307 00:47:51.023518 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.042153 containerd[1891]: time="2026-03-07T00:47:51.042039398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647669788d-ghpbl,Uid:73a07220-fa0f-401c-8414-ea1f73e911cf,Namespace:calico-system,Attempt:0,}" Mar 7 00:47:51.051074 kubelet[3511]: E0307 00:47:51.050801 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.051074 kubelet[3511]: W0307 00:47:51.050823 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.051074 kubelet[3511]: E0307 00:47:51.050842 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.084716 containerd[1891]: time="2026-03-07T00:47:51.084644079Z" level=info msg="connecting to shim b17b6a3f565942205ef9ffe5cc89209197888ccbd6f3a59b9ea1490ddb2fd4ad" address="unix:///run/containerd/s/3680067498969abdb9267da0cfba872fd7d1f32e5c17c54416e5c75da4499766" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:47:51.103902 systemd[1]: Started cri-containerd-b17b6a3f565942205ef9ffe5cc89209197888ccbd6f3a59b9ea1490ddb2fd4ad.scope - libcontainer container b17b6a3f565942205ef9ffe5cc89209197888ccbd6f3a59b9ea1490ddb2fd4ad. Mar 7 00:47:51.115781 kubelet[3511]: E0307 00:47:51.115744 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.116070 kubelet[3511]: W0307 00:47:51.115765 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.116070 kubelet[3511]: E0307 00:47:51.115942 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.116359 kubelet[3511]: E0307 00:47:51.116345 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.116478 kubelet[3511]: W0307 00:47:51.116405 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.116478 kubelet[3511]: E0307 00:47:51.116420 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.116784 kubelet[3511]: E0307 00:47:51.116749 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.116784 kubelet[3511]: W0307 00:47:51.116765 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.116784 kubelet[3511]: E0307 00:47:51.116774 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.117140 kubelet[3511]: E0307 00:47:51.117104 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.117140 kubelet[3511]: W0307 00:47:51.117120 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.117140 kubelet[3511]: E0307 00:47:51.117129 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.117461 kubelet[3511]: E0307 00:47:51.117443 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.117625 kubelet[3511]: W0307 00:47:51.117453 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.117625 kubelet[3511]: E0307 00:47:51.117533 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.118031 kubelet[3511]: E0307 00:47:51.118001 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.118031 kubelet[3511]: W0307 00:47:51.118012 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.118031 kubelet[3511]: E0307 00:47:51.118021 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.118375 kubelet[3511]: E0307 00:47:51.118343 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.118375 kubelet[3511]: W0307 00:47:51.118354 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.118375 kubelet[3511]: E0307 00:47:51.118363 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.118650 kubelet[3511]: E0307 00:47:51.118622 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.118650 kubelet[3511]: W0307 00:47:51.118633 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.118650 kubelet[3511]: E0307 00:47:51.118641 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.118984 kubelet[3511]: E0307 00:47:51.118953 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.118984 kubelet[3511]: W0307 00:47:51.118965 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.118984 kubelet[3511]: E0307 00:47:51.118974 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.119262 kubelet[3511]: E0307 00:47:51.119249 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.119347 kubelet[3511]: W0307 00:47:51.119336 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.119401 kubelet[3511]: E0307 00:47:51.119390 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.119639 kubelet[3511]: E0307 00:47:51.119610 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.119639 kubelet[3511]: W0307 00:47:51.119623 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.119639 kubelet[3511]: E0307 00:47:51.119630 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.120129 kubelet[3511]: E0307 00:47:51.120096 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.120129 kubelet[3511]: W0307 00:47:51.120108 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.120129 kubelet[3511]: E0307 00:47:51.120118 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.120441 kubelet[3511]: E0307 00:47:51.120414 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.120441 kubelet[3511]: W0307 00:47:51.120423 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.120441 kubelet[3511]: E0307 00:47:51.120432 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.121007 kubelet[3511]: E0307 00:47:51.120995 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.121146 kubelet[3511]: W0307 00:47:51.121054 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.121146 kubelet[3511]: E0307 00:47:51.121070 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.121883 kubelet[3511]: E0307 00:47:51.121777 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.121883 kubelet[3511]: W0307 00:47:51.121792 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.121883 kubelet[3511]: E0307 00:47:51.121802 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.122676 kubelet[3511]: E0307 00:47:51.122513 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.122676 kubelet[3511]: W0307 00:47:51.122644 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.123128 kubelet[3511]: E0307 00:47:51.122989 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.123692 kubelet[3511]: E0307 00:47:51.123628 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.123692 kubelet[3511]: W0307 00:47:51.123642 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.123998 kubelet[3511]: E0307 00:47:51.123862 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.125140 kubelet[3511]: E0307 00:47:51.125041 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.125253 kubelet[3511]: W0307 00:47:51.125215 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.125253 kubelet[3511]: E0307 00:47:51.125233 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.126728 kubelet[3511]: E0307 00:47:51.126711 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.126965 kubelet[3511]: W0307 00:47:51.126902 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.126965 kubelet[3511]: E0307 00:47:51.126920 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.127939 kubelet[3511]: E0307 00:47:51.127538 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.127939 kubelet[3511]: W0307 00:47:51.127671 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.127939 kubelet[3511]: E0307 00:47:51.127686 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.129812 kubelet[3511]: E0307 00:47:51.129797 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.130293 kubelet[3511]: W0307 00:47:51.129932 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.130293 kubelet[3511]: E0307 00:47:51.129946 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.130798 kubelet[3511]: E0307 00:47:51.130703 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.133443 kubelet[3511]: W0307 00:47:51.130956 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.133443 kubelet[3511]: E0307 00:47:51.130976 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.134121 kubelet[3511]: E0307 00:47:51.134020 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.134305 kubelet[3511]: W0307 00:47:51.134287 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.134539 kubelet[3511]: E0307 00:47:51.134472 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.135749 kubelet[3511]: E0307 00:47:51.135734 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.135845 kubelet[3511]: W0307 00:47:51.135832 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.136039 kubelet[3511]: E0307 00:47:51.136024 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:51.137158 kubelet[3511]: E0307 00:47:51.137143 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.137344 kubelet[3511]: W0307 00:47:51.137326 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.137565 kubelet[3511]: E0307 00:47:51.137509 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.141356 containerd[1891]: time="2026-03-07T00:47:51.140784607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fsg7p,Uid:600bf431-de96-4560-80c7-8e75df20f03e,Namespace:calico-system,Attempt:0,}" Mar 7 00:47:51.144242 containerd[1891]: time="2026-03-07T00:47:51.144134982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647669788d-ghpbl,Uid:73a07220-fa0f-401c-8414-ea1f73e911cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"b17b6a3f565942205ef9ffe5cc89209197888ccbd6f3a59b9ea1490ddb2fd4ad\"" Mar 7 00:47:51.148288 containerd[1891]: time="2026-03-07T00:47:51.148183428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 00:47:51.153516 kubelet[3511]: E0307 00:47:51.153383 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:51.153516 kubelet[3511]: W0307 00:47:51.153406 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:51.153516 kubelet[3511]: E0307 00:47:51.153424 3511 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:51.206482 containerd[1891]: time="2026-03-07T00:47:51.206435962Z" level=info msg="connecting to shim 8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c" address="unix:///run/containerd/s/f49e92aef8042bd6f15b2430ac8a0e75a1e28b0c2f766be7e7870c2a87671cf6" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:47:51.221819 systemd[1]: Started cri-containerd-8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c.scope - libcontainer container 8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c. Mar 7 00:47:51.248118 containerd[1891]: time="2026-03-07T00:47:51.247998944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fsg7p,Uid:600bf431-de96-4560-80c7-8e75df20f03e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\"" Mar 7 00:47:52.502472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585049692.mount: Deactivated successfully. 
Mar 7 00:47:52.918380 kubelet[3511]: E0307 00:47:52.918330 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:47:53.553349 containerd[1891]: time="2026-03-07T00:47:53.552815758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:47:53.555215 containerd[1891]: time="2026-03-07T00:47:53.555186930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 00:47:53.558616 containerd[1891]: time="2026-03-07T00:47:53.558589737Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:47:53.562482 containerd[1891]: time="2026-03-07T00:47:53.562432053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:47:53.562937 containerd[1891]: time="2026-03-07T00:47:53.562791865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.414558899s" Mar 7 00:47:53.562937 containerd[1891]: time="2026-03-07T00:47:53.562821033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 00:47:53.564436 containerd[1891]: time="2026-03-07T00:47:53.564410077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 00:47:53.581828 containerd[1891]: time="2026-03-07T00:47:53.581752206Z" level=info msg="CreateContainer within sandbox \"b17b6a3f565942205ef9ffe5cc89209197888ccbd6f3a59b9ea1490ddb2fd4ad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 00:47:53.606072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754164300.mount: Deactivated successfully. Mar 7 00:47:53.608399 containerd[1891]: time="2026-03-07T00:47:53.607742215Z" level=info msg="Container 4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:47:53.623681 containerd[1891]: time="2026-03-07T00:47:53.623611584Z" level=info msg="CreateContainer within sandbox \"b17b6a3f565942205ef9ffe5cc89209197888ccbd6f3a59b9ea1490ddb2fd4ad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4\"" Mar 7 00:47:53.624821 containerd[1891]: time="2026-03-07T00:47:53.624784470Z" level=info msg="StartContainer for \"4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4\"" Mar 7 00:47:53.626203 containerd[1891]: time="2026-03-07T00:47:53.626089088Z" level=info msg="connecting to shim 4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4" address="unix:///run/containerd/s/3680067498969abdb9267da0cfba872fd7d1f32e5c17c54416e5c75da4499766" protocol=ttrpc version=3 Mar 7 00:47:53.642834 systemd[1]: Started cri-containerd-4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4.scope - libcontainer container 4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4. 
Mar 7 00:47:53.678728 containerd[1891]: time="2026-03-07T00:47:53.678527824Z" level=info msg="StartContainer for \"4a963c6af7cdeaf60a11b4606c5755efed781677ab49c5e3dbccbc926aac9ba4\" returns successfully" Mar 7 00:47:54.015472 kubelet[3511]: I0307 00:47:54.015382 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-647669788d-ghpbl" podStartSLOduration=1.598986427 podStartE2EDuration="4.015366361s" podCreationTimestamp="2026-03-07 00:47:50 +0000 UTC" firstStartedPulling="2026-03-07 00:47:51.147361529 +0000 UTC m=+18.313873893" lastFinishedPulling="2026-03-07 00:47:53.563741463 +0000 UTC m=+20.730253827" observedRunningTime="2026-03-07 00:47:54.015021134 +0000 UTC m=+21.181533562" watchObservedRunningTime="2026-03-07 00:47:54.015366361 +0000 UTC m=+21.181878741" Mar 7 00:47:54.018450 kubelet[3511]: E0307 00:47:54.018410 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:54.018617 kubelet[3511]: W0307 00:47:54.018434 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:54.018617 kubelet[3511]: E0307 00:47:54.018578 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:54.018869 kubelet[3511]: E0307 00:47:54.018830 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:54.018943 kubelet[3511]: W0307 00:47:54.018842 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:54.019006 kubelet[3511]: E0307 00:47:54.018996 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:54.019221 kubelet[3511]: E0307 00:47:54.019203 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:54.019319 kubelet[3511]: W0307 00:47:54.019213 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:54.019319 kubelet[3511]: E0307 00:47:54.019281 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:54.019576 kubelet[3511]: E0307 00:47:54.019530 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:54.019576 kubelet[3511]: W0307 00:47:54.019540 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:54.019576 kubelet[3511]: E0307 00:47:54.019551 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:54.019888 kubelet[3511]: E0307 00:47:54.019841 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:54.019888 kubelet[3511]: W0307 00:47:54.019853 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:54.019888 kubelet[3511]: E0307 00:47:54.019862 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 00:47:54.918224 kubelet[3511]: E0307 00:47:54.917866 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49"
Mar 7 00:47:55.004540 kubelet[3511]: I0307 00:47:55.004510 3511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 00:47:55.011559 containerd[1891]: time="2026-03-07T00:47:55.011012682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:55.014816 containerd[1891]: time="2026-03-07T00:47:55.014782500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Mar 7 00:47:55.018159 containerd[1891]: time="2026-03-07T00:47:55.018127304Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:55.022744 containerd[1891]: time="2026-03-07T00:47:55.022693708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:47:55.023402 containerd[1891]: time="2026-03-07T00:47:55.023371778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.458936027s"
Mar 7 00:47:55.023501 containerd[1891]: time="2026-03-07T00:47:55.023486637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Mar 7 00:47:55.032254 containerd[1891]: time="2026-03-07T00:47:55.032111164Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Error: unexpected end of JSON input" Mar 7 00:47:55.045370 kubelet[3511]: E0307 00:47:55.045342 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.045370 kubelet[3511]: W0307 00:47:55.045352 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.045370 kubelet[3511]: E0307 00:47:55.045360 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:55.046013 kubelet[3511]: E0307 00:47:55.045705 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.046013 kubelet[3511]: W0307 00:47:55.045716 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.046013 kubelet[3511]: E0307 00:47:55.045725 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:55.046167 kubelet[3511]: E0307 00:47:55.046155 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.046233 kubelet[3511]: W0307 00:47:55.046222 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.046290 kubelet[3511]: E0307 00:47:55.046281 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:55.046532 kubelet[3511]: E0307 00:47:55.046520 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.046600 kubelet[3511]: W0307 00:47:55.046590 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.046647 kubelet[3511]: E0307 00:47:55.046637 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:55.046911 kubelet[3511]: E0307 00:47:55.046882 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.046911 kubelet[3511]: W0307 00:47:55.046893 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.046911 kubelet[3511]: E0307 00:47:55.046901 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:55.047203 kubelet[3511]: E0307 00:47:55.047176 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.047203 kubelet[3511]: W0307 00:47:55.047186 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.047203 kubelet[3511]: E0307 00:47:55.047194 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:55.047478 kubelet[3511]: E0307 00:47:55.047446 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.047478 kubelet[3511]: W0307 00:47:55.047456 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.047478 kubelet[3511]: E0307 00:47:55.047469 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:55.047802 kubelet[3511]: E0307 00:47:55.047774 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.047802 kubelet[3511]: W0307 00:47:55.047784 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.047802 kubelet[3511]: E0307 00:47:55.047792 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:55.048166 kubelet[3511]: E0307 00:47:55.048123 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.048371 kubelet[3511]: W0307 00:47:55.048356 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.048435 kubelet[3511]: E0307 00:47:55.048423 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:55.048996 kubelet[3511]: E0307 00:47:55.048984 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.049198 kubelet[3511]: W0307 00:47:55.049094 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.049198 kubelet[3511]: E0307 00:47:55.049110 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:47:55.049406 kubelet[3511]: E0307 00:47:55.049397 3511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:47:55.049489 kubelet[3511]: W0307 00:47:55.049457 3511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:47:55.049489 kubelet[3511]: E0307 00:47:55.049471 3511 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:47:55.051715 containerd[1891]: time="2026-03-07T00:47:55.051637036Z" level=info msg="Container 3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:47:55.070597 containerd[1891]: time="2026-03-07T00:47:55.070548312Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39\"" Mar 7 00:47:55.072005 containerd[1891]: time="2026-03-07T00:47:55.071976958Z" level=info msg="StartContainer for \"3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39\"" Mar 7 00:47:55.073103 containerd[1891]: time="2026-03-07T00:47:55.073076233Z" level=info msg="connecting to shim 3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39" address="unix:///run/containerd/s/f49e92aef8042bd6f15b2430ac8a0e75a1e28b0c2f766be7e7870c2a87671cf6" protocol=ttrpc version=3 Mar 7 00:47:55.093797 systemd[1]: Started cri-containerd-3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39.scope - libcontainer container 3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39. 
Mar 7 00:47:55.161204 containerd[1891]: time="2026-03-07T00:47:55.161099865Z" level=info msg="StartContainer for \"3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39\" returns successfully" Mar 7 00:47:55.166907 systemd[1]: cri-containerd-3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39.scope: Deactivated successfully. Mar 7 00:47:55.172110 containerd[1891]: time="2026-03-07T00:47:55.171856869Z" level=info msg="received container exit event container_id:\"3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39\" id:\"3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39\" pid:4162 exited_at:{seconds:1772844475 nanos:171385406}" Mar 7 00:47:55.191392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bb4baa4adb854559dc582474648f381dcdea282cf6216e0aa11459b39e4fd39-rootfs.mount: Deactivated successfully. Mar 7 00:47:56.918612 kubelet[3511]: E0307 00:47:56.918411 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:47:57.015690 containerd[1891]: time="2026-03-07T00:47:57.013241532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 00:47:58.917825 kubelet[3511]: E0307 00:47:58.917394 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:00.919432 kubelet[3511]: E0307 00:48:00.919260 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:02.918415 kubelet[3511]: E0307 00:48:02.918358 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:04.010547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328209828.mount: Deactivated successfully. Mar 7 00:48:04.603403 containerd[1891]: time="2026-03-07T00:48:04.602930442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:04.606019 containerd[1891]: time="2026-03-07T00:48:04.605992621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 00:48:04.608634 containerd[1891]: time="2026-03-07T00:48:04.608591993Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:04.613682 containerd[1891]: time="2026-03-07T00:48:04.613570169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:04.613999 containerd[1891]: time="2026-03-07T00:48:04.613977094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.600700185s" Mar 7 00:48:04.614081 containerd[1891]: time="2026-03-07T00:48:04.614069185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 00:48:04.621501 containerd[1891]: time="2026-03-07T00:48:04.621477560Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 00:48:04.643796 containerd[1891]: time="2026-03-07T00:48:04.643761741Z" level=info msg="Container 2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:04.661212 containerd[1891]: time="2026-03-07T00:48:04.661161885Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948\"" Mar 7 00:48:04.663690 containerd[1891]: time="2026-03-07T00:48:04.663093363Z" level=info msg="StartContainer for \"2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948\"" Mar 7 00:48:04.664774 containerd[1891]: time="2026-03-07T00:48:04.664705847Z" level=info msg="connecting to shim 2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948" address="unix:///run/containerd/s/f49e92aef8042bd6f15b2430ac8a0e75a1e28b0c2f766be7e7870c2a87671cf6" protocol=ttrpc version=3 Mar 7 00:48:04.682794 systemd[1]: Started cri-containerd-2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948.scope - libcontainer container 2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948. 
Mar 7 00:48:04.751849 containerd[1891]: time="2026-03-07T00:48:04.751469953Z" level=info msg="StartContainer for \"2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948\" returns successfully" Mar 7 00:48:04.780648 systemd[1]: cri-containerd-2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948.scope: Deactivated successfully. Mar 7 00:48:04.783121 containerd[1891]: time="2026-03-07T00:48:04.783081915Z" level=info msg="received container exit event container_id:\"2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948\" id:\"2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948\" pid:4221 exited_at:{seconds:1772844484 nanos:782697470}" Mar 7 00:48:04.800391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e92f4f0185e43a9c48c9c563e845a3b52ac9602aeecf80eb3ee79ce703b2948-rootfs.mount: Deactivated successfully. Mar 7 00:48:04.917957 kubelet[3511]: E0307 00:48:04.917609 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:06.032428 containerd[1891]: time="2026-03-07T00:48:06.032196412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 00:48:06.917835 kubelet[3511]: E0307 00:48:06.917733 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:08.920038 kubelet[3511]: E0307 00:48:08.919998 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:10.149150 containerd[1891]: time="2026-03-07T00:48:10.148570405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:10.151912 containerd[1891]: time="2026-03-07T00:48:10.151880253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 7 00:48:10.155562 containerd[1891]: time="2026-03-07T00:48:10.155537737Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:10.161455 containerd[1891]: time="2026-03-07T00:48:10.161426825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:10.162140 containerd[1891]: time="2026-03-07T00:48:10.162108144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 4.129871179s" Mar 7 00:48:10.162210 containerd[1891]: time="2026-03-07T00:48:10.162144481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 7 00:48:10.173243 containerd[1891]: time="2026-03-07T00:48:10.172798779Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 00:48:10.197314 containerd[1891]: time="2026-03-07T00:48:10.197283314Z" level=info msg="Container 1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:10.198957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831431504.mount: Deactivated successfully. Mar 7 00:48:10.217917 containerd[1891]: time="2026-03-07T00:48:10.217868788Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d\"" Mar 7 00:48:10.219068 containerd[1891]: time="2026-03-07T00:48:10.219041300Z" level=info msg="StartContainer for \"1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d\"" Mar 7 00:48:10.220518 containerd[1891]: time="2026-03-07T00:48:10.220496653Z" level=info msg="connecting to shim 1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d" address="unix:///run/containerd/s/f49e92aef8042bd6f15b2430ac8a0e75a1e28b0c2f766be7e7870c2a87671cf6" protocol=ttrpc version=3 Mar 7 00:48:10.237788 systemd[1]: Started cri-containerd-1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d.scope - libcontainer container 1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d. 
Mar 7 00:48:10.302516 containerd[1891]: time="2026-03-07T00:48:10.302477723Z" level=info msg="StartContainer for \"1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d\" returns successfully" Mar 7 00:48:10.919672 kubelet[3511]: E0307 00:48:10.918989 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:12.271199 systemd[1]: cri-containerd-1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d.scope: Deactivated successfully. Mar 7 00:48:12.272743 systemd[1]: cri-containerd-1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d.scope: Consumed 342ms CPU time, 187.1M memory peak, 304K read from disk, 171.3M written to disk. Mar 7 00:48:12.285505 containerd[1891]: time="2026-03-07T00:48:12.285468800Z" level=info msg="received container exit event container_id:\"1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d\" id:\"1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d\" pid:4277 exited_at:{seconds:1772844492 nanos:285290674}" Mar 7 00:48:12.325150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1428ecf2541823539468ba18c8b928401516a0b3051f263fd3f60299e058326d-rootfs.mount: Deactivated successfully. Mar 7 00:48:12.354813 kubelet[3511]: I0307 00:48:12.354782 3511 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 00:48:12.423060 systemd[1]: Created slice kubepods-besteffort-pod339721af_1d4c_4dbd_b563_d324a6852e48.slice - libcontainer container kubepods-besteffort-pod339721af_1d4c_4dbd_b563_d324a6852e48.slice. 
Mar 7 00:48:12.428695 systemd[1]: Created slice kubepods-burstable-pod24211977_da64_4da8_9d07_8dae38677b33.slice - libcontainer container kubepods-burstable-pod24211977_da64_4da8_9d07_8dae38677b33.slice. Mar 7 00:48:12.435407 systemd[1]: Created slice kubepods-besteffort-pod965d8c13_0477_4e0d_9e9c_d7bf9c9caa4c.slice - libcontainer container kubepods-besteffort-pod965d8c13_0477_4e0d_9e9c_d7bf9c9caa4c.slice. Mar 7 00:48:12.442109 systemd[1]: Created slice kubepods-besteffort-podd97a2fa8_2ddc_413b_9031_67d2dd8a9c93.slice - libcontainer container kubepods-besteffort-podd97a2fa8_2ddc_413b_9031_67d2dd8a9c93.slice. Mar 7 00:48:12.445280 kubelet[3511]: I0307 00:48:12.445241 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9vc\" (UniqueName: \"kubernetes.io/projected/24211977-da64-4da8-9d07-8dae38677b33-kube-api-access-mx9vc\") pod \"coredns-674b8bbfcf-p5trg\" (UID: \"24211977-da64-4da8-9d07-8dae38677b33\") " pod="kube-system/coredns-674b8bbfcf-p5trg" Mar 7 00:48:12.445280 kubelet[3511]: I0307 00:48:12.445271 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86842823-6ca2-4c01-b911-c13ec76466f9-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-nrttm\" (UID: \"86842823-6ca2-4c01-b911-c13ec76466f9\") " pod="calico-system/goldmane-5b85766d88-nrttm" Mar 7 00:48:12.445280 kubelet[3511]: I0307 00:48:12.445282 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5w2\" (UniqueName: \"kubernetes.io/projected/86842823-6ca2-4c01-b911-c13ec76466f9-kube-api-access-nc5w2\") pod \"goldmane-5b85766d88-nrttm\" (UID: \"86842823-6ca2-4c01-b911-c13ec76466f9\") " pod="calico-system/goldmane-5b85766d88-nrttm" Mar 7 00:48:12.445406 kubelet[3511]: I0307 00:48:12.445294 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86842823-6ca2-4c01-b911-c13ec76466f9-config\") pod \"goldmane-5b85766d88-nrttm\" (UID: \"86842823-6ca2-4c01-b911-c13ec76466f9\") " pod="calico-system/goldmane-5b85766d88-nrttm" Mar 7 00:48:12.445406 kubelet[3511]: I0307 00:48:12.445306 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d97a2fa8-2ddc-413b-9031-67d2dd8a9c93-calico-apiserver-certs\") pod \"calico-apiserver-6f54844f5c-wxlm6\" (UID: \"d97a2fa8-2ddc-413b-9031-67d2dd8a9c93\") " pod="calico-system/calico-apiserver-6f54844f5c-wxlm6" Mar 7 00:48:12.445406 kubelet[3511]: I0307 00:48:12.445318 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/339721af-1d4c-4dbd-b563-d324a6852e48-calico-apiserver-certs\") pod \"calico-apiserver-6f54844f5c-cq7pk\" (UID: \"339721af-1d4c-4dbd-b563-d324a6852e48\") " pod="calico-system/calico-apiserver-6f54844f5c-cq7pk" Mar 7 00:48:12.445406 kubelet[3511]: I0307 00:48:12.445327 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24211977-da64-4da8-9d07-8dae38677b33-config-volume\") pod \"coredns-674b8bbfcf-p5trg\" (UID: \"24211977-da64-4da8-9d07-8dae38677b33\") " pod="kube-system/coredns-674b8bbfcf-p5trg" Mar 7 00:48:12.445406 kubelet[3511]: I0307 00:48:12.445338 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c-tigera-ca-bundle\") pod \"calico-kube-controllers-55cd6f7bcf-v49lv\" (UID: \"965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c\") " pod="calico-system/calico-kube-controllers-55cd6f7bcf-v49lv" Mar 7 00:48:12.445486 kubelet[3511]: I0307 00:48:12.445347 
3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9tt\" (UniqueName: \"kubernetes.io/projected/965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c-kube-api-access-gr9tt\") pod \"calico-kube-controllers-55cd6f7bcf-v49lv\" (UID: \"965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c\") " pod="calico-system/calico-kube-controllers-55cd6f7bcf-v49lv" Mar 7 00:48:12.445486 kubelet[3511]: I0307 00:48:12.445364 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/86842823-6ca2-4c01-b911-c13ec76466f9-goldmane-key-pair\") pod \"goldmane-5b85766d88-nrttm\" (UID: \"86842823-6ca2-4c01-b911-c13ec76466f9\") " pod="calico-system/goldmane-5b85766d88-nrttm" Mar 7 00:48:12.445486 kubelet[3511]: I0307 00:48:12.445374 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzlc\" (UniqueName: \"kubernetes.io/projected/d97a2fa8-2ddc-413b-9031-67d2dd8a9c93-kube-api-access-trzlc\") pod \"calico-apiserver-6f54844f5c-wxlm6\" (UID: \"d97a2fa8-2ddc-413b-9031-67d2dd8a9c93\") " pod="calico-system/calico-apiserver-6f54844f5c-wxlm6" Mar 7 00:48:12.445486 kubelet[3511]: I0307 00:48:12.445382 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcm9m\" (UniqueName: \"kubernetes.io/projected/339721af-1d4c-4dbd-b563-d324a6852e48-kube-api-access-qcm9m\") pod \"calico-apiserver-6f54844f5c-cq7pk\" (UID: \"339721af-1d4c-4dbd-b563-d324a6852e48\") " pod="calico-system/calico-apiserver-6f54844f5c-cq7pk" Mar 7 00:48:12.449958 systemd[1]: Created slice kubepods-besteffort-pod86842823_6ca2_4c01_b911_c13ec76466f9.slice - libcontainer container kubepods-besteffort-pod86842823_6ca2_4c01_b911_c13ec76466f9.slice. 
Mar 7 00:48:12.464465 systemd[1]: Created slice kubepods-besteffort-pod4e137df7_6955_4394_b633_76879fa0fd1c.slice - libcontainer container kubepods-besteffort-pod4e137df7_6955_4394_b633_76879fa0fd1c.slice. Mar 7 00:48:12.469194 systemd[1]: Created slice kubepods-burstable-pod8abddb19_3efc_447e_bb6a_3f700f287c72.slice - libcontainer container kubepods-burstable-pod8abddb19_3efc_447e_bb6a_3f700f287c72.slice. Mar 7 00:48:12.546582 kubelet[3511]: I0307 00:48:12.546453 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-nginx-config\") pod \"whisker-5c764cbdd4-x2wl8\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " pod="calico-system/whisker-5c764cbdd4-x2wl8" Mar 7 00:48:12.546582 kubelet[3511]: I0307 00:48:12.546508 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88jn\" (UniqueName: \"kubernetes.io/projected/4e137df7-6955-4394-b633-76879fa0fd1c-kube-api-access-d88jn\") pod \"whisker-5c764cbdd4-x2wl8\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " pod="calico-system/whisker-5c764cbdd4-x2wl8" Mar 7 00:48:12.546752 kubelet[3511]: I0307 00:48:12.546546 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-ca-bundle\") pod \"whisker-5c764cbdd4-x2wl8\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " pod="calico-system/whisker-5c764cbdd4-x2wl8" Mar 7 00:48:12.547533 kubelet[3511]: I0307 00:48:12.547397 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-backend-key-pair\") pod \"whisker-5c764cbdd4-x2wl8\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " 
pod="calico-system/whisker-5c764cbdd4-x2wl8" Mar 7 00:48:12.547533 kubelet[3511]: I0307 00:48:12.547428 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8abddb19-3efc-447e-bb6a-3f700f287c72-config-volume\") pod \"coredns-674b8bbfcf-6cgbl\" (UID: \"8abddb19-3efc-447e-bb6a-3f700f287c72\") " pod="kube-system/coredns-674b8bbfcf-6cgbl" Mar 7 00:48:12.547533 kubelet[3511]: I0307 00:48:12.547441 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mth7p\" (UniqueName: \"kubernetes.io/projected/8abddb19-3efc-447e-bb6a-3f700f287c72-kube-api-access-mth7p\") pod \"coredns-674b8bbfcf-6cgbl\" (UID: \"8abddb19-3efc-447e-bb6a-3f700f287c72\") " pod="kube-system/coredns-674b8bbfcf-6cgbl" Mar 7 00:48:12.726735 containerd[1891]: time="2026-03-07T00:48:12.726692068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-cq7pk,Uid:339721af-1d4c-4dbd-b563-d324a6852e48,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:12.733736 containerd[1891]: time="2026-03-07T00:48:12.733704674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p5trg,Uid:24211977-da64-4da8-9d07-8dae38677b33,Namespace:kube-system,Attempt:0,}" Mar 7 00:48:12.737671 containerd[1891]: time="2026-03-07T00:48:12.737634952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cd6f7bcf-v49lv,Uid:965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:12.746461 containerd[1891]: time="2026-03-07T00:48:12.746434346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-wxlm6,Uid:d97a2fa8-2ddc-413b-9031-67d2dd8a9c93,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:12.762312 containerd[1891]: time="2026-03-07T00:48:12.762182065Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-nrttm,Uid:86842823-6ca2-4c01-b911-c13ec76466f9,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:12.768992 containerd[1891]: time="2026-03-07T00:48:12.768966551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c764cbdd4-x2wl8,Uid:4e137df7-6955-4394-b633-76879fa0fd1c,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:12.773635 containerd[1891]: time="2026-03-07T00:48:12.773600788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6cgbl,Uid:8abddb19-3efc-447e-bb6a-3f700f287c72,Namespace:kube-system,Attempt:0,}" Mar 7 00:48:12.849606 containerd[1891]: time="2026-03-07T00:48:12.849419321Z" level=error msg="Failed to destroy network for sandbox \"babecd195b492970cfd7ff49fb929fe2ab3ffb5e27a61995b351d447daac1809\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.854290 containerd[1891]: time="2026-03-07T00:48:12.854193427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cd6f7bcf-v49lv,Uid:965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"babecd195b492970cfd7ff49fb929fe2ab3ffb5e27a61995b351d447daac1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.854774 kubelet[3511]: E0307 00:48:12.854626 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babecd195b492970cfd7ff49fb929fe2ab3ffb5e27a61995b351d447daac1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Mar 7 00:48:12.854774 kubelet[3511]: E0307 00:48:12.854707 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babecd195b492970cfd7ff49fb929fe2ab3ffb5e27a61995b351d447daac1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cd6f7bcf-v49lv" Mar 7 00:48:12.854774 kubelet[3511]: E0307 00:48:12.854738 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babecd195b492970cfd7ff49fb929fe2ab3ffb5e27a61995b351d447daac1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cd6f7bcf-v49lv" Mar 7 00:48:12.855061 kubelet[3511]: E0307 00:48:12.854932 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55cd6f7bcf-v49lv_calico-system(965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55cd6f7bcf-v49lv_calico-system(965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"babecd195b492970cfd7ff49fb929fe2ab3ffb5e27a61995b351d447daac1809\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cd6f7bcf-v49lv" podUID="965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c" Mar 7 00:48:12.881586 containerd[1891]: time="2026-03-07T00:48:12.881542155Z" level=error 
msg="Failed to destroy network for sandbox \"dcb928eff30dcc1274babf47ea348088ab70a7d37495a9747e200ba81ddc76a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.883100 containerd[1891]: time="2026-03-07T00:48:12.883069919Z" level=error msg="Failed to destroy network for sandbox \"71612ec6f174494e6fc337f58ac0dd1f96c8b38ea34988d6e4f216af87a79151\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.885810 containerd[1891]: time="2026-03-07T00:48:12.885773291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p5trg,Uid:24211977-da64-4da8-9d07-8dae38677b33,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb928eff30dcc1274babf47ea348088ab70a7d37495a9747e200ba81ddc76a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.886218 kubelet[3511]: E0307 00:48:12.886184 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb928eff30dcc1274babf47ea348088ab70a7d37495a9747e200ba81ddc76a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.886280 kubelet[3511]: E0307 00:48:12.886238 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb928eff30dcc1274babf47ea348088ab70a7d37495a9747e200ba81ddc76a5\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p5trg" Mar 7 00:48:12.886280 kubelet[3511]: E0307 00:48:12.886254 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb928eff30dcc1274babf47ea348088ab70a7d37495a9747e200ba81ddc76a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p5trg" Mar 7 00:48:12.886329 kubelet[3511]: E0307 00:48:12.886308 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p5trg_kube-system(24211977-da64-4da8-9d07-8dae38677b33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p5trg_kube-system(24211977-da64-4da8-9d07-8dae38677b33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcb928eff30dcc1274babf47ea348088ab70a7d37495a9747e200ba81ddc76a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p5trg" podUID="24211977-da64-4da8-9d07-8dae38677b33" Mar 7 00:48:12.891473 containerd[1891]: time="2026-03-07T00:48:12.891429771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-cq7pk,Uid:339721af-1d4c-4dbd-b563-d324a6852e48,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71612ec6f174494e6fc337f58ac0dd1f96c8b38ea34988d6e4f216af87a79151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.891773 kubelet[3511]: E0307 00:48:12.891646 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71612ec6f174494e6fc337f58ac0dd1f96c8b38ea34988d6e4f216af87a79151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.891773 kubelet[3511]: E0307 00:48:12.891698 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71612ec6f174494e6fc337f58ac0dd1f96c8b38ea34988d6e4f216af87a79151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f54844f5c-cq7pk" Mar 7 00:48:12.891773 kubelet[3511]: E0307 00:48:12.891711 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71612ec6f174494e6fc337f58ac0dd1f96c8b38ea34988d6e4f216af87a79151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f54844f5c-cq7pk" Mar 7 00:48:12.891866 kubelet[3511]: E0307 00:48:12.891742 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f54844f5c-cq7pk_calico-system(339721af-1d4c-4dbd-b563-d324a6852e48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f54844f5c-cq7pk_calico-system(339721af-1d4c-4dbd-b563-d324a6852e48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"71612ec6f174494e6fc337f58ac0dd1f96c8b38ea34988d6e4f216af87a79151\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f54844f5c-cq7pk" podUID="339721af-1d4c-4dbd-b563-d324a6852e48" Mar 7 00:48:12.917298 containerd[1891]: time="2026-03-07T00:48:12.917246695Z" level=error msg="Failed to destroy network for sandbox \"ce43988371650e26fe3030893d62bdbdf6dbac17ac723e19427d76ee47bbca73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.924166 containerd[1891]: time="2026-03-07T00:48:12.922207199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-wxlm6,Uid:d97a2fa8-2ddc-413b-9031-67d2dd8a9c93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce43988371650e26fe3030893d62bdbdf6dbac17ac723e19427d76ee47bbca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.924288 kubelet[3511]: E0307 00:48:12.922571 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce43988371650e26fe3030893d62bdbdf6dbac17ac723e19427d76ee47bbca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.924288 kubelet[3511]: E0307 00:48:12.922611 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ce43988371650e26fe3030893d62bdbdf6dbac17ac723e19427d76ee47bbca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f54844f5c-wxlm6" Mar 7 00:48:12.924288 kubelet[3511]: E0307 00:48:12.922626 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce43988371650e26fe3030893d62bdbdf6dbac17ac723e19427d76ee47bbca73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f54844f5c-wxlm6" Mar 7 00:48:12.924365 kubelet[3511]: E0307 00:48:12.922684 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f54844f5c-wxlm6_calico-system(d97a2fa8-2ddc-413b-9031-67d2dd8a9c93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f54844f5c-wxlm6_calico-system(d97a2fa8-2ddc-413b-9031-67d2dd8a9c93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce43988371650e26fe3030893d62bdbdf6dbac17ac723e19427d76ee47bbca73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f54844f5c-wxlm6" podUID="d97a2fa8-2ddc-413b-9031-67d2dd8a9c93" Mar 7 00:48:12.928941 systemd[1]: Created slice kubepods-besteffort-podb1c239e1_f122_4450_a1a5_965f8a8b2b49.slice - libcontainer container kubepods-besteffort-podb1c239e1_f122_4450_a1a5_965f8a8b2b49.slice. 
Mar 7 00:48:12.935492 containerd[1891]: time="2026-03-07T00:48:12.935449048Z" level=error msg="Failed to destroy network for sandbox \"303b678473230701f5c6db140c8d96fff47ad0a989543dba6115311865166b5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.938570 containerd[1891]: time="2026-03-07T00:48:12.938534681Z" level=error msg="Failed to destroy network for sandbox \"76fe72037332eb1b9398cc68495f4243a266aefae5c6d694d38f43cd6b8aa381\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.938868 containerd[1891]: time="2026-03-07T00:48:12.938758857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nrttm,Uid:86842823-6ca2-4c01-b911-c13ec76466f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"303b678473230701f5c6db140c8d96fff47ad0a989543dba6115311865166b5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.939271 kubelet[3511]: E0307 00:48:12.939247 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303b678473230701f5c6db140c8d96fff47ad0a989543dba6115311865166b5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.939489 kubelet[3511]: E0307 00:48:12.939472 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"303b678473230701f5c6db140c8d96fff47ad0a989543dba6115311865166b5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nrttm" Mar 7 00:48:12.939633 containerd[1891]: time="2026-03-07T00:48:12.939516250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs64,Uid:b1c239e1-f122-4450-a1a5-965f8a8b2b49,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:12.939768 kubelet[3511]: E0307 00:48:12.939748 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303b678473230701f5c6db140c8d96fff47ad0a989543dba6115311865166b5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nrttm" Mar 7 00:48:12.939900 kubelet[3511]: E0307 00:48:12.939875 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-nrttm_calico-system(86842823-6ca2-4c01-b911-c13ec76466f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-nrttm_calico-system(86842823-6ca2-4c01-b911-c13ec76466f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"303b678473230701f5c6db140c8d96fff47ad0a989543dba6115311865166b5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-nrttm" podUID="86842823-6ca2-4c01-b911-c13ec76466f9" Mar 7 00:48:12.944094 containerd[1891]: time="2026-03-07T00:48:12.942704575Z" level=error msg="Failed to destroy network for sandbox 
\"570b5783444f4700c2fdb834898d1ffe32393c7fd3d2605a5a622472a37bd597\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.944094 containerd[1891]: time="2026-03-07T00:48:12.942902213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6cgbl,Uid:8abddb19-3efc-447e-bb6a-3f700f287c72,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76fe72037332eb1b9398cc68495f4243a266aefae5c6d694d38f43cd6b8aa381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.944394 kubelet[3511]: E0307 00:48:12.944363 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76fe72037332eb1b9398cc68495f4243a266aefae5c6d694d38f43cd6b8aa381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.944497 kubelet[3511]: E0307 00:48:12.944482 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76fe72037332eb1b9398cc68495f4243a266aefae5c6d694d38f43cd6b8aa381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6cgbl" Mar 7 00:48:12.944557 kubelet[3511]: E0307 00:48:12.944540 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"76fe72037332eb1b9398cc68495f4243a266aefae5c6d694d38f43cd6b8aa381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6cgbl" Mar 7 00:48:12.944645 kubelet[3511]: E0307 00:48:12.944627 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6cgbl_kube-system(8abddb19-3efc-447e-bb6a-3f700f287c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6cgbl_kube-system(8abddb19-3efc-447e-bb6a-3f700f287c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76fe72037332eb1b9398cc68495f4243a266aefae5c6d694d38f43cd6b8aa381\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6cgbl" podUID="8abddb19-3efc-447e-bb6a-3f700f287c72" Mar 7 00:48:12.946856 containerd[1891]: time="2026-03-07T00:48:12.946829299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c764cbdd4-x2wl8,Uid:4e137df7-6955-4394-b633-76879fa0fd1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"570b5783444f4700c2fdb834898d1ffe32393c7fd3d2605a5a622472a37bd597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.947083 kubelet[3511]: E0307 00:48:12.947057 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"570b5783444f4700c2fdb834898d1ffe32393c7fd3d2605a5a622472a37bd597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:12.947139 kubelet[3511]: E0307 00:48:12.947092 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"570b5783444f4700c2fdb834898d1ffe32393c7fd3d2605a5a622472a37bd597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c764cbdd4-x2wl8" Mar 7 00:48:12.947139 kubelet[3511]: E0307 00:48:12.947105 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"570b5783444f4700c2fdb834898d1ffe32393c7fd3d2605a5a622472a37bd597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c764cbdd4-x2wl8" Mar 7 00:48:12.947175 kubelet[3511]: E0307 00:48:12.947140 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c764cbdd4-x2wl8_calico-system(4e137df7-6955-4394-b633-76879fa0fd1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c764cbdd4-x2wl8_calico-system(4e137df7-6955-4394-b633-76879fa0fd1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"570b5783444f4700c2fdb834898d1ffe32393c7fd3d2605a5a622472a37bd597\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c764cbdd4-x2wl8" podUID="4e137df7-6955-4394-b633-76879fa0fd1c" Mar 7 00:48:13.000564 containerd[1891]: time="2026-03-07T00:48:13.000510272Z" level=error msg="Failed to 
destroy network for sandbox \"e050a0a463988ce1a664bc13733accca102abdbfdfce6c22a5ebc1c94a6c9e82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:13.004035 containerd[1891]: time="2026-03-07T00:48:13.003994422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs64,Uid:b1c239e1-f122-4450-a1a5-965f8a8b2b49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e050a0a463988ce1a664bc13733accca102abdbfdfce6c22a5ebc1c94a6c9e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:13.004642 kubelet[3511]: E0307 00:48:13.004209 3511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e050a0a463988ce1a664bc13733accca102abdbfdfce6c22a5ebc1c94a6c9e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:48:13.004642 kubelet[3511]: E0307 00:48:13.004267 3511 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e050a0a463988ce1a664bc13733accca102abdbfdfce6c22a5ebc1c94a6c9e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cs64" Mar 7 00:48:13.004642 kubelet[3511]: E0307 00:48:13.004286 3511 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e050a0a463988ce1a664bc13733accca102abdbfdfce6c22a5ebc1c94a6c9e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cs64" Mar 7 00:48:13.004779 kubelet[3511]: E0307 00:48:13.004330 3511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7cs64_calico-system(b1c239e1-f122-4450-a1a5-965f8a8b2b49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7cs64_calico-system(b1c239e1-f122-4450-a1a5-965f8a8b2b49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e050a0a463988ce1a664bc13733accca102abdbfdfce6c22a5ebc1c94a6c9e82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cs64" podUID="b1c239e1-f122-4450-a1a5-965f8a8b2b49" Mar 7 00:48:13.064341 containerd[1891]: time="2026-03-07T00:48:13.064303357Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 00:48:13.085264 containerd[1891]: time="2026-03-07T00:48:13.084450953Z" level=info msg="Container c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:13.101514 containerd[1891]: time="2026-03-07T00:48:13.101393960Z" level=info msg="CreateContainer within sandbox \"8098cb3061c6ce03697a1b65f7b38e2df031ed7e966c0069a649cdb63197200c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487\"" Mar 7 00:48:13.102849 containerd[1891]: time="2026-03-07T00:48:13.102822360Z" level=info msg="StartContainer 
for \"c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487\"" Mar 7 00:48:13.104095 containerd[1891]: time="2026-03-07T00:48:13.104068603Z" level=info msg="connecting to shim c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487" address="unix:///run/containerd/s/f49e92aef8042bd6f15b2430ac8a0e75a1e28b0c2f766be7e7870c2a87671cf6" protocol=ttrpc version=3 Mar 7 00:48:13.120796 systemd[1]: Started cri-containerd-c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487.scope - libcontainer container c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487. Mar 7 00:48:13.185635 containerd[1891]: time="2026-03-07T00:48:13.185585241Z" level=info msg="StartContainer for \"c4a782545390a239730f9d9192e344ee72607ca239d81e6fa0296224e0eca487\" returns successfully" Mar 7 00:48:13.352650 kubelet[3511]: I0307 00:48:13.351860 3511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-nginx-config\") pod \"4e137df7-6955-4394-b633-76879fa0fd1c\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " Mar 7 00:48:13.352650 kubelet[3511]: I0307 00:48:13.351906 3511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d88jn\" (UniqueName: \"kubernetes.io/projected/4e137df7-6955-4394-b633-76879fa0fd1c-kube-api-access-d88jn\") pod \"4e137df7-6955-4394-b633-76879fa0fd1c\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " Mar 7 00:48:13.352650 kubelet[3511]: I0307 00:48:13.351933 3511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-backend-key-pair\") pod \"4e137df7-6955-4394-b633-76879fa0fd1c\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " Mar 7 00:48:13.352650 kubelet[3511]: I0307 00:48:13.351948 3511 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-ca-bundle\") pod \"4e137df7-6955-4394-b633-76879fa0fd1c\" (UID: \"4e137df7-6955-4394-b633-76879fa0fd1c\") " Mar 7 00:48:13.352650 kubelet[3511]: I0307 00:48:13.352214 3511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "4e137df7-6955-4394-b633-76879fa0fd1c" (UID: "4e137df7-6955-4394-b633-76879fa0fd1c"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:48:13.352847 kubelet[3511]: I0307 00:48:13.352235 3511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4e137df7-6955-4394-b633-76879fa0fd1c" (UID: "4e137df7-6955-4394-b633-76879fa0fd1c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:48:13.357455 systemd[1]: var-lib-kubelet-pods-4e137df7\x2d6955\x2d4394\x2db633\x2d76879fa0fd1c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 00:48:13.359840 kubelet[3511]: I0307 00:48:13.359770 3511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4e137df7-6955-4394-b633-76879fa0fd1c" (UID: "4e137df7-6955-4394-b633-76879fa0fd1c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 00:48:13.361323 kubelet[3511]: I0307 00:48:13.360486 3511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e137df7-6955-4394-b633-76879fa0fd1c-kube-api-access-d88jn" (OuterVolumeSpecName: "kube-api-access-d88jn") pod "4e137df7-6955-4394-b633-76879fa0fd1c" (UID: "4e137df7-6955-4394-b633-76879fa0fd1c"). InnerVolumeSpecName "kube-api-access-d88jn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 00:48:13.360953 systemd[1]: var-lib-kubelet-pods-4e137df7\x2d6955\x2d4394\x2db633\x2d76879fa0fd1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd88jn.mount: Deactivated successfully. Mar 7 00:48:13.453073 kubelet[3511]: I0307 00:48:13.453000 3511 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-ca-bundle\") on node \"ci-4459.2.3-n-9877c76adf\" DevicePath \"\"" Mar 7 00:48:13.453073 kubelet[3511]: I0307 00:48:13.453036 3511 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4e137df7-6955-4394-b633-76879fa0fd1c-nginx-config\") on node \"ci-4459.2.3-n-9877c76adf\" DevicePath \"\"" Mar 7 00:48:13.453073 kubelet[3511]: I0307 00:48:13.453044 3511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d88jn\" (UniqueName: \"kubernetes.io/projected/4e137df7-6955-4394-b633-76879fa0fd1c-kube-api-access-d88jn\") on node \"ci-4459.2.3-n-9877c76adf\" DevicePath \"\"" Mar 7 00:48:13.453073 kubelet[3511]: I0307 00:48:13.453051 3511 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e137df7-6955-4394-b633-76879fa0fd1c-whisker-backend-key-pair\") on node \"ci-4459.2.3-n-9877c76adf\" DevicePath \"\"" Mar 7 00:48:14.059738 systemd[1]: Removed slice 
kubepods-besteffort-pod4e137df7_6955_4394_b633_76879fa0fd1c.slice - libcontainer container kubepods-besteffort-pod4e137df7_6955_4394_b633_76879fa0fd1c.slice. Mar 7 00:48:14.098452 kubelet[3511]: I0307 00:48:14.097036 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fsg7p" podStartSLOduration=5.181093461 podStartE2EDuration="24.097018699s" podCreationTimestamp="2026-03-07 00:47:50 +0000 UTC" firstStartedPulling="2026-03-07 00:47:51.249079308 +0000 UTC m=+18.415591680" lastFinishedPulling="2026-03-07 00:48:10.165004546 +0000 UTC m=+37.331516918" observedRunningTime="2026-03-07 00:48:14.075560898 +0000 UTC m=+41.242073270" watchObservedRunningTime="2026-03-07 00:48:14.097018699 +0000 UTC m=+41.263531063" Mar 7 00:48:14.173962 systemd[1]: Created slice kubepods-besteffort-podff32b394_0e21_4e44_8f51_11643144e876.slice - libcontainer container kubepods-besteffort-podff32b394_0e21_4e44_8f51_11643144e876.slice. Mar 7 00:48:14.257898 kubelet[3511]: I0307 00:48:14.257809 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ff32b394-0e21-4e44-8f51-11643144e876-nginx-config\") pod \"whisker-6b5b7b7649-b94kn\" (UID: \"ff32b394-0e21-4e44-8f51-11643144e876\") " pod="calico-system/whisker-6b5b7b7649-b94kn" Mar 7 00:48:14.258196 kubelet[3511]: I0307 00:48:14.258083 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff32b394-0e21-4e44-8f51-11643144e876-whisker-ca-bundle\") pod \"whisker-6b5b7b7649-b94kn\" (UID: \"ff32b394-0e21-4e44-8f51-11643144e876\") " pod="calico-system/whisker-6b5b7b7649-b94kn" Mar 7 00:48:14.258196 kubelet[3511]: I0307 00:48:14.258111 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ff32b394-0e21-4e44-8f51-11643144e876-whisker-backend-key-pair\") pod \"whisker-6b5b7b7649-b94kn\" (UID: \"ff32b394-0e21-4e44-8f51-11643144e876\") " pod="calico-system/whisker-6b5b7b7649-b94kn" Mar 7 00:48:14.258196 kubelet[3511]: I0307 00:48:14.258157 3511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qtmq\" (UniqueName: \"kubernetes.io/projected/ff32b394-0e21-4e44-8f51-11643144e876-kube-api-access-8qtmq\") pod \"whisker-6b5b7b7649-b94kn\" (UID: \"ff32b394-0e21-4e44-8f51-11643144e876\") " pod="calico-system/whisker-6b5b7b7649-b94kn" Mar 7 00:48:14.477380 containerd[1891]: time="2026-03-07T00:48:14.477328820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5b7b7649-b94kn,Uid:ff32b394-0e21-4e44-8f51-11643144e876,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:14.643880 systemd-networkd[1477]: cali87095f3c774: Link UP Mar 7 00:48:14.645222 systemd-networkd[1477]: cali87095f3c774: Gained carrier Mar 7 00:48:14.668132 containerd[1891]: 2026-03-07 00:48:14.516 [ERROR][4631] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:48:14.668132 containerd[1891]: 2026-03-07 00:48:14.530 [INFO][4631] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0 whisker-6b5b7b7649- calico-system ff32b394-0e21-4e44-8f51-11643144e876 895 0 2026-03-07 00:48:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b5b7b7649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf whisker-6b5b7b7649-b94kn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali87095f3c774 [] [] }} 
ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-" Mar 7 00:48:14.668132 containerd[1891]: 2026-03-07 00:48:14.530 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.668132 containerd[1891]: 2026-03-07 00:48:14.567 [INFO][4682] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" HandleID="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Workload="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.574 [INFO][4682] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" HandleID="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Workload="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"whisker-6b5b7b7649-b94kn", "timestamp":"2026-03-07 00:48:14.567188717 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001e8dc0)} Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.574 [INFO][4682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM 
lock. Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.574 [INFO][4682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.574 [INFO][4682] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.576 [INFO][4682] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.580 [INFO][4682] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.583 [INFO][4682] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.585 [INFO][4682] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668343 containerd[1891]: 2026-03-07 00:48:14.587 [INFO][4682] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.587 [INFO][4682] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.588 [INFO][4682] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4 Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.592 [INFO][4682] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" 
host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.601 [INFO][4682] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.4.1/26] block=192.168.4.0/26 handle="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.602 [INFO][4682] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.1/26] handle="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.602 [INFO][4682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:48:14.668474 containerd[1891]: 2026-03-07 00:48:14.603 [INFO][4682] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.1/26] IPv6=[] ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" HandleID="k8s-pod-network.b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Workload="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.668568 containerd[1891]: 2026-03-07 00:48:14.607 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0", GenerateName:"whisker-6b5b7b7649-", Namespace:"calico-system", SelfLink:"", UID:"ff32b394-0e21-4e44-8f51-11643144e876", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b5b7b7649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"whisker-6b5b7b7649-b94kn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali87095f3c774", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:14.668568 containerd[1891]: 2026-03-07 00:48:14.607 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.1/32] ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.668625 containerd[1891]: 2026-03-07 00:48:14.607 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87095f3c774 ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.668625 containerd[1891]: 2026-03-07 00:48:14.646 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" 
WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.670071 containerd[1891]: 2026-03-07 00:48:14.647 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0", GenerateName:"whisker-6b5b7b7649-", Namespace:"calico-system", SelfLink:"", UID:"ff32b394-0e21-4e44-8f51-11643144e876", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 48, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b5b7b7649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4", Pod:"whisker-6b5b7b7649-b94kn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali87095f3c774", MAC:"12:16:46:96:f6:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:14.670817 
containerd[1891]: 2026-03-07 00:48:14.663 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" Namespace="calico-system" Pod="whisker-6b5b7b7649-b94kn" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-whisker--6b5b7b7649--b94kn-eth0" Mar 7 00:48:14.744948 containerd[1891]: time="2026-03-07T00:48:14.744397515Z" level=info msg="connecting to shim b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4" address="unix:///run/containerd/s/32cd95fc4c67241d23b4748b580fc4846aa7c1f856d7f04b7e062fdfece168f1" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:14.794827 systemd[1]: Started cri-containerd-b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4.scope - libcontainer container b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4. Mar 7 00:48:14.845523 containerd[1891]: time="2026-03-07T00:48:14.845481321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5b7b7649-b94kn,Uid:ff32b394-0e21-4e44-8f51-11643144e876,Namespace:calico-system,Attempt:0,} returns sandbox id \"b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4\"" Mar 7 00:48:14.847204 containerd[1891]: time="2026-03-07T00:48:14.847141217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 00:48:14.930416 kubelet[3511]: I0307 00:48:14.930241 3511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e137df7-6955-4394-b633-76879fa0fd1c" path="/var/lib/kubelet/pods/4e137df7-6955-4394-b633-76879fa0fd1c/volumes" Mar 7 00:48:15.259465 systemd-networkd[1477]: vxlan.calico: Link UP Mar 7 00:48:15.259477 systemd-networkd[1477]: vxlan.calico: Gained carrier Mar 7 00:48:15.818860 systemd-networkd[1477]: cali87095f3c774: Gained IPv6LL Mar 7 00:48:16.278968 containerd[1891]: time="2026-03-07T00:48:16.278908593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:16.281882 containerd[1891]: time="2026-03-07T00:48:16.281837896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 00:48:16.285218 containerd[1891]: time="2026-03-07T00:48:16.285169443Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:16.289931 containerd[1891]: time="2026-03-07T00:48:16.289884396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.442712746s" Mar 7 00:48:16.289931 containerd[1891]: time="2026-03-07T00:48:16.289927325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 00:48:16.295734 containerd[1891]: time="2026-03-07T00:48:16.295694543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:16.299488 containerd[1891]: time="2026-03-07T00:48:16.299443281Z" level=info msg="CreateContainer within sandbox \"b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 00:48:16.321882 containerd[1891]: time="2026-03-07T00:48:16.321821052Z" level=info msg="Container e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:16.338887 containerd[1891]: 
time="2026-03-07T00:48:16.338842122Z" level=info msg="CreateContainer within sandbox \"b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f\"" Mar 7 00:48:16.339825 containerd[1891]: time="2026-03-07T00:48:16.339800873Z" level=info msg="StartContainer for \"e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f\"" Mar 7 00:48:16.341215 containerd[1891]: time="2026-03-07T00:48:16.341185453Z" level=info msg="connecting to shim e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f" address="unix:///run/containerd/s/32cd95fc4c67241d23b4748b580fc4846aa7c1f856d7f04b7e062fdfece168f1" protocol=ttrpc version=3 Mar 7 00:48:16.362835 systemd[1]: Started cri-containerd-e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f.scope - libcontainer container e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f. Mar 7 00:48:16.403939 containerd[1891]: time="2026-03-07T00:48:16.403898103Z" level=info msg="StartContainer for \"e3afba6535dbd4d298256e4609aeecfa55097912e85f24cc03e877ddaa71dc0f\" returns successfully" Mar 7 00:48:16.406232 containerd[1891]: time="2026-03-07T00:48:16.406199666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 00:48:16.586903 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Mar 7 00:48:18.000064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198443146.mount: Deactivated successfully. 
Mar 7 00:48:18.070744 containerd[1891]: time="2026-03-07T00:48:18.070691638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:18.073525 containerd[1891]: time="2026-03-07T00:48:18.073374684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 00:48:18.079766 containerd[1891]: time="2026-03-07T00:48:18.079729472Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:18.091577 containerd[1891]: time="2026-03-07T00:48:18.091510314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:18.092158 containerd[1891]: time="2026-03-07T00:48:18.092039723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.685811872s" Mar 7 00:48:18.092158 containerd[1891]: time="2026-03-07T00:48:18.092063771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 00:48:18.108881 containerd[1891]: time="2026-03-07T00:48:18.108844269Z" level=info msg="CreateContainer within sandbox \"b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 00:48:18.135315 
containerd[1891]: time="2026-03-07T00:48:18.135184025Z" level=info msg="Container 0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:18.163991 containerd[1891]: time="2026-03-07T00:48:18.163946827Z" level=info msg="CreateContainer within sandbox \"b968a027795223656053d6a3b532b02ec691d54515809ba24e750393279d96d4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6\"" Mar 7 00:48:18.164769 containerd[1891]: time="2026-03-07T00:48:18.164725204Z" level=info msg="StartContainer for \"0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6\"" Mar 7 00:48:18.165907 containerd[1891]: time="2026-03-07T00:48:18.165881241Z" level=info msg="connecting to shim 0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6" address="unix:///run/containerd/s/32cd95fc4c67241d23b4748b580fc4846aa7c1f856d7f04b7e062fdfece168f1" protocol=ttrpc version=3 Mar 7 00:48:18.193813 systemd[1]: Started cri-containerd-0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6.scope - libcontainer container 0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6. 
Mar 7 00:48:18.231449 containerd[1891]: time="2026-03-07T00:48:18.231351316Z" level=info msg="StartContainer for \"0387fecd3ba0dfabbc8d06b24ae66fed2e4382c8d728260600780de9aa6917f6\" returns successfully" Mar 7 00:48:20.588400 kubelet[3511]: I0307 00:48:20.587915 3511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:48:20.701693 kubelet[3511]: I0307 00:48:20.700770 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b5b7b7649-b94kn" podStartSLOduration=3.454591622 podStartE2EDuration="6.700752825s" podCreationTimestamp="2026-03-07 00:48:14 +0000 UTC" firstStartedPulling="2026-03-07 00:48:14.846636824 +0000 UTC m=+42.013149188" lastFinishedPulling="2026-03-07 00:48:18.092798027 +0000 UTC m=+45.259310391" observedRunningTime="2026-03-07 00:48:19.080803614 +0000 UTC m=+46.247315986" watchObservedRunningTime="2026-03-07 00:48:20.700752825 +0000 UTC m=+47.867265189" Mar 7 00:48:24.918614 containerd[1891]: time="2026-03-07T00:48:24.918493397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-cq7pk,Uid:339721af-1d4c-4dbd-b563-d324a6852e48,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:24.920179 containerd[1891]: time="2026-03-07T00:48:24.918493421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6cgbl,Uid:8abddb19-3efc-447e-bb6a-3f700f287c72,Namespace:kube-system,Attempt:0,}" Mar 7 00:48:24.920179 containerd[1891]: time="2026-03-07T00:48:24.919470709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cd6f7bcf-v49lv,Uid:965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:25.116788 systemd-networkd[1477]: cali5736b838177: Link UP Mar 7 00:48:25.117825 systemd-networkd[1477]: cali5736b838177: Gained carrier Mar 7 00:48:25.138074 containerd[1891]: 2026-03-07 00:48:25.004 [INFO][5026] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0 calico-apiserver-6f54844f5c- calico-system 339721af-1d4c-4dbd-b563-d324a6852e48 836 0 2026-03-07 00:47:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f54844f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf calico-apiserver-6f54844f5c-cq7pk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5736b838177 [] [] }} ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-" Mar 7 00:48:25.138074 containerd[1891]: 2026-03-07 00:48:25.004 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.138074 containerd[1891]: 2026-03-07 00:48:25.046 [INFO][5068] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" HandleID="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.062 [INFO][5068] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" HandleID="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" 
Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"calico-apiserver-6f54844f5c-cq7pk", "timestamp":"2026-03-07 00:48:25.046113234 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.062 [INFO][5068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.062 [INFO][5068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.062 [INFO][5068] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.065 [INFO][5068] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.069 [INFO][5068] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.074 [INFO][5068] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.077 [INFO][5068] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138281 containerd[1891]: 2026-03-07 00:48:25.085 [INFO][5068] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 
host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.085 [INFO][5068] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.087 [INFO][5068] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904 Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.098 [INFO][5068] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.104 [INFO][5068] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.4.2/26] block=192.168.4.0/26 handle="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.105 [INFO][5068] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.2/26] handle="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.105 [INFO][5068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:48:25.138423 containerd[1891]: 2026-03-07 00:48:25.105 [INFO][5068] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.2/26] IPv6=[] ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" HandleID="k8s-pod-network.718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.138528 containerd[1891]: 2026-03-07 00:48:25.107 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0", GenerateName:"calico-apiserver-6f54844f5c-", Namespace:"calico-system", SelfLink:"", UID:"339721af-1d4c-4dbd-b563-d324a6852e48", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f54844f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"calico-apiserver-6f54844f5c-cq7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5736b838177", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:25.138570 containerd[1891]: 2026-03-07 00:48:25.107 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.2/32] ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.138570 containerd[1891]: 2026-03-07 00:48:25.108 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5736b838177 ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.138570 containerd[1891]: 2026-03-07 00:48:25.118 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.138617 containerd[1891]: 2026-03-07 00:48:25.119 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0", GenerateName:"calico-apiserver-6f54844f5c-", Namespace:"calico-system", SelfLink:"", UID:"339721af-1d4c-4dbd-b563-d324a6852e48", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f54844f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904", Pod:"calico-apiserver-6f54844f5c-cq7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5736b838177", MAC:"da:1d:8e:8d:20:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:25.138701 containerd[1891]: 2026-03-07 00:48:25.134 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-cq7pk" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--cq7pk-eth0" Mar 7 00:48:25.191699 containerd[1891]: time="2026-03-07T00:48:25.191201929Z" level=info msg="connecting to shim 
718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904" address="unix:///run/containerd/s/12457223f662d4e2dd3bc0f3395af93cd6542de4e9ea13d5c9ac0464ed9a93f7" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:25.209272 systemd-networkd[1477]: cali3225af5baff: Link UP Mar 7 00:48:25.209428 systemd-networkd[1477]: cali3225af5baff: Gained carrier Mar 7 00:48:25.231116 containerd[1891]: 2026-03-07 00:48:24.983 [INFO][5036] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0 coredns-674b8bbfcf- kube-system 8abddb19-3efc-447e-bb6a-3f700f287c72 845 0 2026-03-07 00:47:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf coredns-674b8bbfcf-6cgbl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3225af5baff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-" Mar 7 00:48:25.231116 containerd[1891]: 2026-03-07 00:48:24.986 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.231116 containerd[1891]: 2026-03-07 00:48:25.062 [INFO][5063] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" HandleID="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" 
Workload="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.084 [INFO][5063] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" HandleID="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Workload="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebdd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"coredns-674b8bbfcf-6cgbl", "timestamp":"2026-03-07 00:48:25.062918462 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004cf1e0)} Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.085 [INFO][5063] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.105 [INFO][5063] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.105 [INFO][5063] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.165 [INFO][5063] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.170 [INFO][5063] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.174 [INFO][5063] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.176 [INFO][5063] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231296 containerd[1891]: 2026-03-07 00:48:25.177 [INFO][5063] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.177 [INFO][5063] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.181 [INFO][5063] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.187 [INFO][5063] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.201 [INFO][5063] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.4.3/26] block=192.168.4.0/26 handle="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.201 [INFO][5063] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.3/26] handle="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.201 [INFO][5063] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:48:25.231441 containerd[1891]: 2026-03-07 00:48:25.201 [INFO][5063] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.3/26] IPv6=[] ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" HandleID="k8s-pod-network.6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Workload="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.231543 containerd[1891]: 2026-03-07 00:48:25.204 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8abddb19-3efc-447e-bb6a-3f700f287c72", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"coredns-674b8bbfcf-6cgbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3225af5baff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:25.231543 containerd[1891]: 2026-03-07 00:48:25.204 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.3/32] ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.231543 containerd[1891]: 2026-03-07 00:48:25.204 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3225af5baff ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.231543 containerd[1891]: 2026-03-07 00:48:25.210 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.231543 containerd[1891]: 2026-03-07 00:48:25.210 [INFO][5036] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8abddb19-3efc-447e-bb6a-3f700f287c72", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df", Pod:"coredns-674b8bbfcf-6cgbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3225af5baff", MAC:"9a:29:35:a9:c8:a6", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:25.231543 containerd[1891]: 2026-03-07 00:48:25.225 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" Namespace="kube-system" Pod="coredns-674b8bbfcf-6cgbl" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--6cgbl-eth0" Mar 7 00:48:25.236857 systemd[1]: Started cri-containerd-718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904.scope - libcontainer container 718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904. 
Mar 7 00:48:25.282704 containerd[1891]: time="2026-03-07T00:48:25.282026936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-cq7pk,Uid:339721af-1d4c-4dbd-b563-d324a6852e48,Namespace:calico-system,Attempt:0,} returns sandbox id \"718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904\"" Mar 7 00:48:25.285694 containerd[1891]: time="2026-03-07T00:48:25.285120045Z" level=info msg="connecting to shim 6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df" address="unix:///run/containerd/s/a95d0c65285148bbcab276949262ed14eff3a358c66b3e4f64288fa80918e844" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:25.290403 containerd[1891]: time="2026-03-07T00:48:25.290210307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:48:25.320164 systemd[1]: Started cri-containerd-6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df.scope - libcontainer container 6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df. 
Mar 7 00:48:25.327546 systemd-networkd[1477]: cali69dd8b6d3b1: Link UP Mar 7 00:48:25.328973 systemd-networkd[1477]: cali69dd8b6d3b1: Gained carrier Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.010 [INFO][5046] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0 calico-kube-controllers-55cd6f7bcf- calico-system 965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c 842 0 2026-03-07 00:47:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55cd6f7bcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf calico-kube-controllers-55cd6f7bcf-v49lv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali69dd8b6d3b1 [] [] }} ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.010 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.082 [INFO][5073] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" HandleID="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" 
Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.092 [INFO][5073] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" HandleID="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"calico-kube-controllers-55cd6f7bcf-v49lv", "timestamp":"2026-03-07 00:48:25.0824991 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186580)} Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.092 [INFO][5073] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.201 [INFO][5073] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.202 [INFO][5073] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.267 [INFO][5073] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.276 [INFO][5073] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.287 [INFO][5073] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.291 [INFO][5073] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.300 [INFO][5073] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.300 [INFO][5073] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.302 [INFO][5073] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.309 [INFO][5073] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.317 [INFO][5073] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.4.4/26] block=192.168.4.0/26 handle="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.318 [INFO][5073] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.4/26] handle="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.318 [INFO][5073] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:48:25.346837 containerd[1891]: 2026-03-07 00:48:25.318 [INFO][5073] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.4/26] IPv6=[] ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" HandleID="k8s-pod-network.1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.347263 containerd[1891]: 2026-03-07 00:48:25.321 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0", GenerateName:"calico-kube-controllers-55cd6f7bcf-", Namespace:"calico-system", SelfLink:"", UID:"965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"55cd6f7bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"calico-kube-controllers-55cd6f7bcf-v49lv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69dd8b6d3b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:25.347263 containerd[1891]: 2026-03-07 00:48:25.321 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.4/32] ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.347263 containerd[1891]: 2026-03-07 00:48:25.321 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69dd8b6d3b1 ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.347263 containerd[1891]: 2026-03-07 00:48:25.330 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" 
Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.347263 containerd[1891]: 2026-03-07 00:48:25.331 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0", GenerateName:"calico-kube-controllers-55cd6f7bcf-", Namespace:"calico-system", SelfLink:"", UID:"965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cd6f7bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d", Pod:"calico-kube-controllers-55cd6f7bcf-v49lv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69dd8b6d3b1", MAC:"5e:1f:bf:82:07:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:25.347263 containerd[1891]: 2026-03-07 00:48:25.342 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" Namespace="calico-system" Pod="calico-kube-controllers-55cd6f7bcf-v49lv" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--kube--controllers--55cd6f7bcf--v49lv-eth0" Mar 7 00:48:25.374520 containerd[1891]: time="2026-03-07T00:48:25.374477020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6cgbl,Uid:8abddb19-3efc-447e-bb6a-3f700f287c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df\"" Mar 7 00:48:25.384914 containerd[1891]: time="2026-03-07T00:48:25.384866998Z" level=info msg="CreateContainer within sandbox \"6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:48:25.416393 containerd[1891]: time="2026-03-07T00:48:25.415809110Z" level=info msg="Container 885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:25.417473 containerd[1891]: time="2026-03-07T00:48:25.417434011Z" level=info msg="connecting to shim 1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d" address="unix:///run/containerd/s/2b6296b8b471c04f01b0a9cf82c99c05da70ca221197ea42a28b2c74e53df2aa" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:25.431562 containerd[1891]: time="2026-03-07T00:48:25.431463436Z" level=info msg="CreateContainer within sandbox \"6d2a969255d3494e29e66e8246e82e0260bc24c9a99dcd2e5a97406a6cc148df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620\"" Mar 7 00:48:25.432828 systemd[1]: Started cri-containerd-1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d.scope - libcontainer container 1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d. Mar 7 00:48:25.433405 containerd[1891]: time="2026-03-07T00:48:25.432815240Z" level=info msg="StartContainer for \"885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620\"" Mar 7 00:48:25.435343 containerd[1891]: time="2026-03-07T00:48:25.435312730Z" level=info msg="connecting to shim 885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620" address="unix:///run/containerd/s/a95d0c65285148bbcab276949262ed14eff3a358c66b3e4f64288fa80918e844" protocol=ttrpc version=3 Mar 7 00:48:25.462625 systemd[1]: Started cri-containerd-885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620.scope - libcontainer container 885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620. Mar 7 00:48:25.501952 containerd[1891]: time="2026-03-07T00:48:25.501906235Z" level=info msg="StartContainer for \"885f64d53265016761736847ada12b12e1e08c880a0d913c0227c80918d2f620\" returns successfully" Mar 7 00:48:25.503925 containerd[1891]: time="2026-03-07T00:48:25.503889836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cd6f7bcf-v49lv,Uid:965d8c13-0477-4e0d-9e9c-d7bf9c9caa4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d\"" Mar 7 00:48:25.918106 containerd[1891]: time="2026-03-07T00:48:25.918060056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cs64,Uid:b1c239e1-f122-4450-a1a5-965f8a8b2b49,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:25.918106 containerd[1891]: time="2026-03-07T00:48:25.918059768Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-wxlm6,Uid:d97a2fa8-2ddc-413b-9031-67d2dd8a9c93,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:26.039119 systemd-networkd[1477]: cali2e5f35eefe3: Link UP Mar 7 00:48:26.039774 systemd-networkd[1477]: cali2e5f35eefe3: Gained carrier Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.963 [INFO][5323] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0 csi-node-driver- calico-system b1c239e1-f122-4450-a1a5-965f8a8b2b49 688 0 2026-03-07 00:47:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf csi-node-driver-7cs64 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e5f35eefe3 [] [] }} ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.963 [INFO][5323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.989 [INFO][5345] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" HandleID="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" 
Workload="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.997 [INFO][5345] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" HandleID="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Workload="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"csi-node-driver-7cs64", "timestamp":"2026-03-07 00:48:25.989176445 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003074a0)} Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.997 [INFO][5345] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.997 [INFO][5345] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.997 [INFO][5345] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:25.999 [INFO][5345] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.004 [INFO][5345] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.009 [INFO][5345] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.013 [INFO][5345] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.017 [INFO][5345] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.017 [INFO][5345] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.019 [INFO][5345] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.024 [INFO][5345] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.033 [INFO][5345] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.4.5/26] block=192.168.4.0/26 handle="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.033 [INFO][5345] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.5/26] handle="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.033 [INFO][5345] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:48:26.058131 containerd[1891]: 2026-03-07 00:48:26.033 [INFO][5345] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.5/26] IPv6=[] ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" HandleID="k8s-pod-network.f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Workload="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.059081 containerd[1891]: 2026-03-07 00:48:26.035 [INFO][5323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b1c239e1-f122-4450-a1a5-965f8a8b2b49", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"csi-node-driver-7cs64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e5f35eefe3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:26.059081 containerd[1891]: 2026-03-07 00:48:26.035 [INFO][5323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.5/32] ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.059081 containerd[1891]: 2026-03-07 00:48:26.035 [INFO][5323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e5f35eefe3 ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.059081 containerd[1891]: 2026-03-07 00:48:26.041 [INFO][5323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.059081 containerd[1891]: 2026-03-07 00:48:26.042 [INFO][5323] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b1c239e1-f122-4450-a1a5-965f8a8b2b49", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce", Pod:"csi-node-driver-7cs64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e5f35eefe3", MAC:"fa:a1:17:4f:e5:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:26.059081 containerd[1891]: 2026-03-07 00:48:26.054 [INFO][5323] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" Namespace="calico-system" Pod="csi-node-driver-7cs64" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-csi--node--driver--7cs64-eth0" Mar 7 00:48:26.102013 containerd[1891]: time="2026-03-07T00:48:26.101853100Z" level=info msg="connecting to shim f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce" address="unix:///run/containerd/s/db8023b4510d031a4f4c253ed552ba81c0f4f70af10ea08b1859c3e204ae8598" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:26.136841 systemd[1]: Started cri-containerd-f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce.scope - libcontainer container f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce. Mar 7 00:48:26.140993 kubelet[3511]: I0307 00:48:26.140914 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6cgbl" podStartSLOduration=46.140520639 podStartE2EDuration="46.140520639s" podCreationTimestamp="2026-03-07 00:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:48:26.111981398 +0000 UTC m=+53.278493810" watchObservedRunningTime="2026-03-07 00:48:26.140520639 +0000 UTC m=+53.307033003" Mar 7 00:48:26.185740 systemd-networkd[1477]: cali2b59b95856d: Link UP Mar 7 00:48:26.190683 systemd-networkd[1477]: cali2b59b95856d: Gained carrier Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:25.982 [INFO][5332] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0 calico-apiserver-6f54844f5c- calico-system d97a2fa8-2ddc-413b-9031-67d2dd8a9c93 843 0 2026-03-07 00:47:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f54844f5c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf calico-apiserver-6f54844f5c-wxlm6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2b59b95856d [] [] }} ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:25.983 [INFO][5332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.008 [INFO][5351] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" HandleID="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.018 [INFO][5351] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" HandleID="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"calico-apiserver-6f54844f5c-wxlm6", "timestamp":"2026-03-07 00:48:26.008498099 +0000 UTC"}, 
Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004b6f20)} Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.018 [INFO][5351] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.033 [INFO][5351] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.033 [INFO][5351] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.100 [INFO][5351] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.127 [INFO][5351] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.141 [INFO][5351] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.146 [INFO][5351] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.154 [INFO][5351] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.154 [INFO][5351] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 
2026-03-07 00:48:26.158 [INFO][5351] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440 Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.163 [INFO][5351] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.177 [INFO][5351] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.4.6/26] block=192.168.4.0/26 handle="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.177 [INFO][5351] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.6/26] handle="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.177 [INFO][5351] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:48:26.217416 containerd[1891]: 2026-03-07 00:48:26.177 [INFO][5351] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.6/26] IPv6=[] ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" HandleID="k8s-pod-network.36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Workload="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.218348 containerd[1891]: 2026-03-07 00:48:26.181 [INFO][5332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0", GenerateName:"calico-apiserver-6f54844f5c-", Namespace:"calico-system", SelfLink:"", UID:"d97a2fa8-2ddc-413b-9031-67d2dd8a9c93", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f54844f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"calico-apiserver-6f54844f5c-wxlm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2b59b95856d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:26.218348 containerd[1891]: 2026-03-07 00:48:26.181 [INFO][5332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.6/32] ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.218348 containerd[1891]: 2026-03-07 00:48:26.181 [INFO][5332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b59b95856d ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.218348 containerd[1891]: 2026-03-07 00:48:26.191 [INFO][5332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.218348 containerd[1891]: 2026-03-07 00:48:26.192 [INFO][5332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0", GenerateName:"calico-apiserver-6f54844f5c-", Namespace:"calico-system", SelfLink:"", UID:"d97a2fa8-2ddc-413b-9031-67d2dd8a9c93", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f54844f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440", Pod:"calico-apiserver-6f54844f5c-wxlm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2b59b95856d", MAC:"ce:57:c0:2f:79:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:26.218348 containerd[1891]: 2026-03-07 00:48:26.210 [INFO][5332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" Namespace="calico-system" Pod="calico-apiserver-6f54844f5c-wxlm6" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-calico--apiserver--6f54844f5c--wxlm6-eth0" Mar 7 00:48:26.221051 containerd[1891]: time="2026-03-07T00:48:26.221007118Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7cs64,Uid:b1c239e1-f122-4450-a1a5-965f8a8b2b49,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce\"" Mar 7 00:48:26.250811 systemd-networkd[1477]: cali3225af5baff: Gained IPv6LL Mar 7 00:48:26.278866 containerd[1891]: time="2026-03-07T00:48:26.278822129Z" level=info msg="connecting to shim 36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440" address="unix:///run/containerd/s/59831e72965c203c9aec959295957dd97b5db64479c96d6d20b806995109f6eb" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:26.305622 systemd[1]: Started cri-containerd-36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440.scope - libcontainer container 36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440. Mar 7 00:48:26.341506 containerd[1891]: time="2026-03-07T00:48:26.341465538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f54844f5c-wxlm6,Uid:d97a2fa8-2ddc-413b-9031-67d2dd8a9c93,Namespace:calico-system,Attempt:0,} returns sandbox id \"36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440\"" Mar 7 00:48:26.954827 systemd-networkd[1477]: cali5736b838177: Gained IPv6LL Mar 7 00:48:26.955603 systemd-networkd[1477]: cali69dd8b6d3b1: Gained IPv6LL Mar 7 00:48:27.595829 systemd-networkd[1477]: cali2e5f35eefe3: Gained IPv6LL Mar 7 00:48:27.795733 containerd[1891]: time="2026-03-07T00:48:27.795385934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:27.798179 containerd[1891]: time="2026-03-07T00:48:27.798122279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 00:48:27.801528 containerd[1891]: time="2026-03-07T00:48:27.801468324Z" level=info msg="ImageCreate event 
name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:27.805580 containerd[1891]: time="2026-03-07T00:48:27.805529008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:27.806266 containerd[1891]: time="2026-03-07T00:48:27.806084035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.515399352s" Mar 7 00:48:27.806266 containerd[1891]: time="2026-03-07T00:48:27.806112316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:48:27.808777 containerd[1891]: time="2026-03-07T00:48:27.808749321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 00:48:27.814831 containerd[1891]: time="2026-03-07T00:48:27.814797294Z" level=info msg="CreateContainer within sandbox \"718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:48:27.833329 containerd[1891]: time="2026-03-07T00:48:27.833285897Z" level=info msg="Container 6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:27.857752 containerd[1891]: time="2026-03-07T00:48:27.857610361Z" level=info msg="CreateContainer within sandbox \"718aeffab997d7294327498181462d453e3e3502e6e6b1049e2dc2e0b077b904\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d\"" Mar 7 00:48:27.858913 containerd[1891]: time="2026-03-07T00:48:27.858529079Z" level=info msg="StartContainer for \"6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d\"" Mar 7 00:48:27.859832 containerd[1891]: time="2026-03-07T00:48:27.859810105Z" level=info msg="connecting to shim 6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d" address="unix:///run/containerd/s/12457223f662d4e2dd3bc0f3395af93cd6542de4e9ea13d5c9ac0464ed9a93f7" protocol=ttrpc version=3 Mar 7 00:48:27.881813 systemd[1]: Started cri-containerd-6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d.scope - libcontainer container 6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d. Mar 7 00:48:27.918959 containerd[1891]: time="2026-03-07T00:48:27.918825291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p5trg,Uid:24211977-da64-4da8-9d07-8dae38677b33,Namespace:kube-system,Attempt:0,}" Mar 7 00:48:27.919340 containerd[1891]: time="2026-03-07T00:48:27.919284258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nrttm,Uid:86842823-6ca2-4c01-b911-c13ec76466f9,Namespace:calico-system,Attempt:0,}" Mar 7 00:48:27.919588 containerd[1891]: time="2026-03-07T00:48:27.919522482Z" level=info msg="StartContainer for \"6bf1407ada9aeed1ce5dfd2d09fbd9e88ea4d3ed6d338b3e555755f6b85b6e1d\" returns successfully" Mar 7 00:48:28.097450 systemd-networkd[1477]: cali1b23864541b: Link UP Mar 7 00:48:28.098355 systemd-networkd[1477]: cali1b23864541b: Gained carrier Mar 7 00:48:28.107980 systemd-networkd[1477]: cali2b59b95856d: Gained IPv6LL Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.007 [INFO][5546] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0 coredns-674b8bbfcf- kube-system 24211977-da64-4da8-9d07-8dae38677b33 839 0 2026-03-07 00:47:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf coredns-674b8bbfcf-p5trg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b23864541b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.007 [INFO][5546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.046 [INFO][5577] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" HandleID="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Workload="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.054 [INFO][5577] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" HandleID="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Workload="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"coredns-674b8bbfcf-p5trg", "timestamp":"2026-03-07 00:48:28.046437753 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400036f600)} Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.054 [INFO][5577] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.054 [INFO][5577] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.054 [INFO][5577] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.056 [INFO][5577] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.066 [INFO][5577] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.070 [INFO][5577] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.071 [INFO][5577] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.073 [INFO][5577] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.073 [INFO][5577] ipam/ipam.go 1245: Attempting to assign 1 addresses from block 
block=192.168.4.0/26 handle="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.074 [INFO][5577] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08 Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.082 [INFO][5577] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.090 [INFO][5577] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.4.7/26] block=192.168.4.0/26 handle="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.090 [INFO][5577] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.7/26] handle="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.090 [INFO][5577] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:48:28.118537 containerd[1891]: 2026-03-07 00:48:28.090 [INFO][5577] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.7/26] IPv6=[] ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" HandleID="k8s-pod-network.33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Workload="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.118977 containerd[1891]: 2026-03-07 00:48:28.092 [INFO][5546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24211977-da64-4da8-9d07-8dae38677b33", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"coredns-674b8bbfcf-p5trg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b23864541b", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:28.118977 containerd[1891]: 2026-03-07 00:48:28.093 [INFO][5546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.7/32] ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.118977 containerd[1891]: 2026-03-07 00:48:28.093 [INFO][5546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b23864541b ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.118977 containerd[1891]: 2026-03-07 00:48:28.097 [INFO][5546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.118977 containerd[1891]: 2026-03-07 00:48:28.098 [INFO][5546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" 
WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24211977-da64-4da8-9d07-8dae38677b33", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08", Pod:"coredns-674b8bbfcf-p5trg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b23864541b", MAC:"8e:72:ea:ac:18:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:28.118977 containerd[1891]: 
2026-03-07 00:48:28.115 [INFO][5546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" Namespace="kube-system" Pod="coredns-674b8bbfcf-p5trg" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-coredns--674b8bbfcf--p5trg-eth0" Mar 7 00:48:28.178436 containerd[1891]: time="2026-03-07T00:48:28.178305505Z" level=info msg="connecting to shim 33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08" address="unix:///run/containerd/s/2f37165cf1d46d47da02964889ed30dfcefdd5a18bcd61408524409def6f4dff" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:28.217993 systemd[1]: Started cri-containerd-33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08.scope - libcontainer container 33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08. Mar 7 00:48:28.238937 systemd-networkd[1477]: califd9f9772568: Link UP Mar 7 00:48:28.240625 systemd-networkd[1477]: califd9f9772568: Gained carrier Mar 7 00:48:28.256273 kubelet[3511]: I0307 00:48:28.256211 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f54844f5c-cq7pk" podStartSLOduration=36.739144355 podStartE2EDuration="39.25619609s" podCreationTimestamp="2026-03-07 00:47:49 +0000 UTC" firstStartedPulling="2026-03-07 00:48:25.289805045 +0000 UTC m=+52.456317409" lastFinishedPulling="2026-03-07 00:48:27.80685678 +0000 UTC m=+54.973369144" observedRunningTime="2026-03-07 00:48:28.136709413 +0000 UTC m=+55.303221777" watchObservedRunningTime="2026-03-07 00:48:28.25619609 +0000 UTC m=+55.422708454" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.014 [INFO][5558] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0 goldmane-5b85766d88- calico-system 86842823-6ca2-4c01-b911-c13ec76466f9 846 0 2026-03-07 00:47:50 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.3-n-9877c76adf goldmane-5b85766d88-nrttm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califd9f9772568 [] [] }} ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.014 [INFO][5558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.055 [INFO][5582] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" HandleID="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Workload="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.066 [INFO][5582] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" HandleID="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Workload="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-9877c76adf", "pod":"goldmane-5b85766d88-nrttm", "timestamp":"2026-03-07 00:48:28.055260544 +0000 UTC"}, Hostname:"ci-4459.2.3-n-9877c76adf", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400028fb80)} Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.066 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.091 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.091 [INFO][5582] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-9877c76adf' Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.157 [INFO][5582] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.171 [INFO][5582] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.181 [INFO][5582] ipam/ipam.go 526: Trying affinity for 192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.184 [INFO][5582] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.189 [INFO][5582] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.0/26 host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.189 [INFO][5582] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.0/26 handle="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.195 [INFO][5582] 
ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.205 [INFO][5582] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.0/26 handle="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.217 [INFO][5582] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.4.8/26] block=192.168.4.0/26 handle="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.217 [INFO][5582] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.8/26] handle="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" host="ci-4459.2.3-n-9877c76adf" Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.217 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:48:28.260176 containerd[1891]: 2026-03-07 00:48:28.217 [INFO][5582] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.8/26] IPv6=[] ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" HandleID="k8s-pod-network.7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Workload="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.260984 containerd[1891]: 2026-03-07 00:48:28.223 [INFO][5558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"86842823-6ca2-4c01-b911-c13ec76466f9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"", Pod:"goldmane-5b85766d88-nrttm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"califd9f9772568", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:28.260984 containerd[1891]: 2026-03-07 00:48:28.223 [INFO][5558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.8/32] ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.260984 containerd[1891]: 2026-03-07 00:48:28.223 [INFO][5558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd9f9772568 ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.260984 containerd[1891]: 2026-03-07 00:48:28.242 [INFO][5558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.260984 containerd[1891]: 2026-03-07 00:48:28.243 [INFO][5558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", 
UID:"86842823-6ca2-4c01-b911-c13ec76466f9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 47, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-9877c76adf", ContainerID:"7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e", Pod:"goldmane-5b85766d88-nrttm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd9f9772568", MAC:"aa:53:35:64:7e:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:48:28.260984 containerd[1891]: 2026-03-07 00:48:28.256 [INFO][5558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" Namespace="calico-system" Pod="goldmane-5b85766d88-nrttm" WorkloadEndpoint="ci--4459.2.3--n--9877c76adf-k8s-goldmane--5b85766d88--nrttm-eth0" Mar 7 00:48:28.281220 containerd[1891]: time="2026-03-07T00:48:28.281182056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p5trg,Uid:24211977-da64-4da8-9d07-8dae38677b33,Namespace:kube-system,Attempt:0,} returns sandbox id \"33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08\"" Mar 7 00:48:28.290721 containerd[1891]: time="2026-03-07T00:48:28.290286369Z" level=info 
msg="CreateContainer within sandbox \"33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:48:28.323499 containerd[1891]: time="2026-03-07T00:48:28.323454113Z" level=info msg="connecting to shim 7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e" address="unix:///run/containerd/s/117637a8c44aa74b967d67884bd33446a7c225096adcb7008d5f45529e386abb" namespace=k8s.io protocol=ttrpc version=3 Mar 7 00:48:28.328797 containerd[1891]: time="2026-03-07T00:48:28.328334976Z" level=info msg="Container d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:28.346357 containerd[1891]: time="2026-03-07T00:48:28.346303746Z" level=info msg="CreateContainer within sandbox \"33d9a4e8f36c1c6a7c6afb21d1f643c3d7ae042f0bfd9b8bf8154cbc37668e08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488\"" Mar 7 00:48:28.349259 containerd[1891]: time="2026-03-07T00:48:28.349181743Z" level=info msg="StartContainer for \"d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488\"" Mar 7 00:48:28.352912 containerd[1891]: time="2026-03-07T00:48:28.352877992Z" level=info msg="connecting to shim d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488" address="unix:///run/containerd/s/2f37165cf1d46d47da02964889ed30dfcefdd5a18bcd61408524409def6f4dff" protocol=ttrpc version=3 Mar 7 00:48:28.360849 systemd[1]: Started cri-containerd-7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e.scope - libcontainer container 7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e. Mar 7 00:48:28.380268 systemd[1]: Started cri-containerd-d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488.scope - libcontainer container d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488. 
Mar 7 00:48:28.429032 containerd[1891]: time="2026-03-07T00:48:28.428990911Z" level=info msg="StartContainer for \"d11f0b8da9a698e3b5683c5663dc266c98f4b9bcf8040f836db5def50acb6488\" returns successfully" Mar 7 00:48:28.447573 containerd[1891]: time="2026-03-07T00:48:28.447508971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nrttm,Uid:86842823-6ca2-4c01-b911-c13ec76466f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e\"" Mar 7 00:48:28.934105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028345608.mount: Deactivated successfully. Mar 7 00:48:29.114890 kubelet[3511]: I0307 00:48:29.114856 3511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:48:29.149313 kubelet[3511]: I0307 00:48:29.149154 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p5trg" podStartSLOduration=49.149122963 podStartE2EDuration="49.149122963s" podCreationTimestamp="2026-03-07 00:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:48:29.130243244 +0000 UTC m=+56.296755608" watchObservedRunningTime="2026-03-07 00:48:29.149122963 +0000 UTC m=+56.315635327" Mar 7 00:48:30.026924 systemd-networkd[1477]: cali1b23864541b: Gained IPv6LL Mar 7 00:48:30.027237 systemd-networkd[1477]: califd9f9772568: Gained IPv6LL Mar 7 00:48:31.884134 containerd[1891]: time="2026-03-07T00:48:31.883552397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:31.888945 containerd[1891]: time="2026-03-07T00:48:31.888914220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 00:48:31.892559 containerd[1891]: 
time="2026-03-07T00:48:31.892494464Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:31.896220 containerd[1891]: time="2026-03-07T00:48:31.896192953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:31.896534 containerd[1891]: time="2026-03-07T00:48:31.896506227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.087728489s" Mar 7 00:48:31.896534 containerd[1891]: time="2026-03-07T00:48:31.896535004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 00:48:31.897727 containerd[1891]: time="2026-03-07T00:48:31.897703954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 00:48:31.916679 containerd[1891]: time="2026-03-07T00:48:31.916619938Z" level=info msg="CreateContainer within sandbox \"1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 00:48:31.932888 containerd[1891]: time="2026-03-07T00:48:31.932798049Z" level=info msg="Container 58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3: CDI devices from CRI Config.CDIDevices: []" Mar 7 00:48:31.948026 containerd[1891]: time="2026-03-07T00:48:31.947910877Z" level=info msg="CreateContainer within sandbox 
\"1b42eb8af90c0212ed163cecec9fab422b9cdbc3a81468f26dcbd3717643975d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3\"" Mar 7 00:48:31.949148 containerd[1891]: time="2026-03-07T00:48:31.949120229Z" level=info msg="StartContainer for \"58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3\"" Mar 7 00:48:31.950284 containerd[1891]: time="2026-03-07T00:48:31.950253082Z" level=info msg="connecting to shim 58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3" address="unix:///run/containerd/s/2b6296b8b471c04f01b0a9cf82c99c05da70ca221197ea42a28b2c74e53df2aa" protocol=ttrpc version=3 Mar 7 00:48:31.968792 systemd[1]: Started cri-containerd-58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3.scope - libcontainer container 58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3. Mar 7 00:48:32.012207 containerd[1891]: time="2026-03-07T00:48:32.012161035Z" level=info msg="StartContainer for \"58da91274b6476bd5d387ec4e89a8ceda43a70b35ee74fdf3646e4cc764676e3\" returns successfully" Mar 7 00:48:32.181954 kubelet[3511]: I0307 00:48:32.181721 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55cd6f7bcf-v49lv" podStartSLOduration=34.79090657 podStartE2EDuration="41.181701126s" podCreationTimestamp="2026-03-07 00:47:51 +0000 UTC" firstStartedPulling="2026-03-07 00:48:25.506752345 +0000 UTC m=+52.673264709" lastFinishedPulling="2026-03-07 00:48:31.897546893 +0000 UTC m=+59.064059265" observedRunningTime="2026-03-07 00:48:32.144926656 +0000 UTC m=+59.311439020" watchObservedRunningTime="2026-03-07 00:48:32.181701126 +0000 UTC m=+59.348213490" Mar 7 00:48:33.442279 containerd[1891]: time="2026-03-07T00:48:33.441741907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 
00:48:33.444683 containerd[1891]: time="2026-03-07T00:48:33.444632265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 00:48:33.448003 containerd[1891]: time="2026-03-07T00:48:33.447972549Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:33.452747 containerd[1891]: time="2026-03-07T00:48:33.452722103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:48:33.453193 containerd[1891]: time="2026-03-07T00:48:33.452999336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.555270886s" Mar 7 00:48:33.453193 containerd[1891]: time="2026-03-07T00:48:33.453026937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 00:48:33.454198 containerd[1891]: time="2026-03-07T00:48:33.454014281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:48:33.461054 containerd[1891]: time="2026-03-07T00:48:33.460869696Z" level=info msg="CreateContainer within sandbox \"f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 00:48:33.484586 containerd[1891]: time="2026-03-07T00:48:33.484547455Z" level=info msg="Container 1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286: CDI devices from CRI 
Config.CDIDevices: []" Mar 7 00:48:33.487534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4005504294.mount: Deactivated successfully. Mar 7 00:48:33.507145 containerd[1891]: time="2026-03-07T00:48:33.507101003Z" level=info msg="CreateContainer within sandbox \"f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286\"" Mar 7 00:48:33.507998 containerd[1891]: time="2026-03-07T00:48:33.507834210Z" level=info msg="StartContainer for \"1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286\"" Mar 7 00:48:33.509294 containerd[1891]: time="2026-03-07T00:48:33.509273561Z" level=info msg="connecting to shim 1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286" address="unix:///run/containerd/s/db8023b4510d031a4f4c253ed552ba81c0f4f70af10ea08b1859c3e204ae8598" protocol=ttrpc version=3 Mar 7 00:48:33.528813 systemd[1]: Started cri-containerd-1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286.scope - libcontainer container 1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286. 
Mar 7 00:48:33.585892 containerd[1891]: time="2026-03-07T00:48:33.585786890Z" level=info msg="StartContainer for \"1576f5ccabe7d773aca06f8c7f3ae4a3a4923220b2bff2a3f5523df98a668286\" returns successfully"
Mar 7 00:48:33.835782 containerd[1891]: time="2026-03-07T00:48:33.835180618Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:33.837819 containerd[1891]: time="2026-03-07T00:48:33.837793926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 7 00:48:33.839201 containerd[1891]: time="2026-03-07T00:48:33.839179011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 384.897729ms"
Mar 7 00:48:33.839311 containerd[1891]: time="2026-03-07T00:48:33.839298703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Mar 7 00:48:33.840492 containerd[1891]: time="2026-03-07T00:48:33.840310288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 7 00:48:33.846428 containerd[1891]: time="2026-03-07T00:48:33.846407974Z" level=info msg="CreateContainer within sandbox \"36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 00:48:33.866229 containerd[1891]: time="2026-03-07T00:48:33.866171286Z" level=info msg="Container e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:48:33.870363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45706971.mount: Deactivated successfully.
Mar 7 00:48:33.885800 containerd[1891]: time="2026-03-07T00:48:33.885729433Z" level=info msg="CreateContainer within sandbox \"36cc9f528811507cb64ca7e4006df248b0230cba0adb0b27be64404dc79ee440\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd\""
Mar 7 00:48:33.887212 containerd[1891]: time="2026-03-07T00:48:33.886636414Z" level=info msg="StartContainer for \"e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd\""
Mar 7 00:48:33.888069 containerd[1891]: time="2026-03-07T00:48:33.888035283Z" level=info msg="connecting to shim e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd" address="unix:///run/containerd/s/59831e72965c203c9aec959295957dd97b5db64479c96d6d20b806995109f6eb" protocol=ttrpc version=3
Mar 7 00:48:33.905810 systemd[1]: Started cri-containerd-e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd.scope - libcontainer container e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd.
Mar 7 00:48:33.946951 containerd[1891]: time="2026-03-07T00:48:33.946836478Z" level=info msg="StartContainer for \"e6665a344470635c365593246fdbb8ff19f1a73255a00d2387724801fab660bd\" returns successfully"
Mar 7 00:48:34.146737 kubelet[3511]: I0307 00:48:34.146685 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f54844f5c-wxlm6" podStartSLOduration=37.649514151 podStartE2EDuration="45.146645486s" podCreationTimestamp="2026-03-07 00:47:49 +0000 UTC" firstStartedPulling="2026-03-07 00:48:26.342959234 +0000 UTC m=+53.509471598" lastFinishedPulling="2026-03-07 00:48:33.840090561 +0000 UTC m=+61.006602933" observedRunningTime="2026-03-07 00:48:34.146297898 +0000 UTC m=+61.312810262" watchObservedRunningTime="2026-03-07 00:48:34.146645486 +0000 UTC m=+61.313157874"
Mar 7 00:48:36.692303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208263153.mount: Deactivated successfully.
Mar 7 00:48:37.185743 containerd[1891]: time="2026-03-07T00:48:37.185689867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:37.188302 containerd[1891]: time="2026-03-07T00:48:37.188264982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980"
Mar 7 00:48:37.190935 containerd[1891]: time="2026-03-07T00:48:37.190889171Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:37.195977 containerd[1891]: time="2026-03-07T00:48:37.195629405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:37.195977 containerd[1891]: time="2026-03-07T00:48:37.195877509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.35554066s"
Mar 7 00:48:37.195977 containerd[1891]: time="2026-03-07T00:48:37.195902758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\""
Mar 7 00:48:37.196817 containerd[1891]: time="2026-03-07T00:48:37.196797755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 7 00:48:37.203134 containerd[1891]: time="2026-03-07T00:48:37.203103575Z" level=info msg="CreateContainer within sandbox \"7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 7 00:48:37.223173 containerd[1891]: time="2026-03-07T00:48:37.223126161Z" level=info msg="Container 1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:48:37.245163 containerd[1891]: time="2026-03-07T00:48:37.245113154Z" level=info msg="CreateContainer within sandbox \"7d64ac8f186a5f611548a72f47b68b7443f238f9c7e431e2c10c0509aae5037e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3\""
Mar 7 00:48:37.245925 containerd[1891]: time="2026-03-07T00:48:37.245763655Z" level=info msg="StartContainer for \"1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3\""
Mar 7 00:48:37.247525 containerd[1891]: time="2026-03-07T00:48:37.247501583Z" level=info msg="connecting to shim 1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3" address="unix:///run/containerd/s/117637a8c44aa74b967d67884bd33446a7c225096adcb7008d5f45529e386abb" protocol=ttrpc version=3
Mar 7 00:48:37.266828 systemd[1]: Started cri-containerd-1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3.scope - libcontainer container 1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3.
Mar 7 00:48:37.306565 containerd[1891]: time="2026-03-07T00:48:37.306529849Z" level=info msg="StartContainer for \"1b27a76ba1ffb3cf36624f38196a94cf57032fd943f5332058db59b9893b38d3\" returns successfully"
Mar 7 00:48:38.159695 kubelet[3511]: I0307 00:48:38.159167 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-nrttm" podStartSLOduration=39.412367611 podStartE2EDuration="48.159154594s" podCreationTimestamp="2026-03-07 00:47:50 +0000 UTC" firstStartedPulling="2026-03-07 00:48:28.450029381 +0000 UTC m=+55.616541745" lastFinishedPulling="2026-03-07 00:48:37.196816356 +0000 UTC m=+64.363328728" observedRunningTime="2026-03-07 00:48:38.158619217 +0000 UTC m=+65.325131589" watchObservedRunningTime="2026-03-07 00:48:38.159154594 +0000 UTC m=+65.325667054"
Mar 7 00:48:38.797429 containerd[1891]: time="2026-03-07T00:48:38.797375114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:38.800929 containerd[1891]: time="2026-03-07T00:48:38.800892124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Mar 7 00:48:38.804208 containerd[1891]: time="2026-03-07T00:48:38.804167486Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:38.814123 containerd[1891]: time="2026-03-07T00:48:38.814072655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:48:38.814573 containerd[1891]: time="2026-03-07T00:48:38.814544014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.617636648s"
Mar 7 00:48:38.814610 containerd[1891]: time="2026-03-07T00:48:38.814576567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Mar 7 00:48:38.828401 containerd[1891]: time="2026-03-07T00:48:38.828342174Z" level=info msg="CreateContainer within sandbox \"f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 7 00:48:38.852384 containerd[1891]: time="2026-03-07T00:48:38.852331512Z" level=info msg="Container 76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5: CDI devices from CRI Config.CDIDevices: []"
Mar 7 00:48:38.878765 containerd[1891]: time="2026-03-07T00:48:38.878716711Z" level=info msg="CreateContainer within sandbox \"f8ce3323ef162644a03525b1d1d240c103f93ef07f0bf6e0e9483b781de874ce\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5\""
Mar 7 00:48:38.880693 containerd[1891]: time="2026-03-07T00:48:38.880088764Z" level=info msg="StartContainer for \"76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5\""
Mar 7 00:48:38.882144 containerd[1891]: time="2026-03-07T00:48:38.882117950Z" level=info msg="connecting to shim 76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5" address="unix:///run/containerd/s/db8023b4510d031a4f4c253ed552ba81c0f4f70af10ea08b1859c3e204ae8598" protocol=ttrpc version=3
Mar 7 00:48:38.908879 systemd[1]: Started cri-containerd-76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5.scope - libcontainer container 76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5.
Mar 7 00:48:38.984147 containerd[1891]: time="2026-03-07T00:48:38.984108561Z" level=info msg="StartContainer for \"76fc5170b46b0977b3a9d78289c4621c1527e699ea06527bade44630470534d5\" returns successfully"
Mar 7 00:48:39.022092 kubelet[3511]: I0307 00:48:39.021970 3511 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 7 00:48:39.022092 kubelet[3511]: I0307 00:48:39.022009 3511 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 7 00:48:39.167509 kubelet[3511]: I0307 00:48:39.167285 3511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7cs64" podStartSLOduration=36.5753272 podStartE2EDuration="49.167272509s" podCreationTimestamp="2026-03-07 00:47:50 +0000 UTC" firstStartedPulling="2026-03-07 00:48:26.223306544 +0000 UTC m=+53.389818908" lastFinishedPulling="2026-03-07 00:48:38.815251853 +0000 UTC m=+65.981764217" observedRunningTime="2026-03-07 00:48:39.167128224 +0000 UTC m=+66.333640596" watchObservedRunningTime="2026-03-07 00:48:39.167272509 +0000 UTC m=+66.333784873"
Mar 7 00:49:06.115753 kubelet[3511]: I0307 00:49:06.114905 3511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 00:49:15.727921 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:48836.service - OpenSSH per-connection server daemon (10.200.16.10:48836).
Mar 7 00:49:16.161127 sshd[6255]: Accepted publickey for core from 10.200.16.10 port 48836 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:16.165696 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:16.173394 systemd-logind[1870]: New session 10 of user core.
Mar 7 00:49:16.178808 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 00:49:16.460878 sshd[6258]: Connection closed by 10.200.16.10 port 48836
Mar 7 00:49:16.461874 sshd-session[6255]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:16.468506 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:48836.service: Deactivated successfully.
Mar 7 00:49:16.472234 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 00:49:16.475288 systemd-logind[1870]: Session 10 logged out. Waiting for processes to exit.
Mar 7 00:49:16.479210 systemd-logind[1870]: Removed session 10.
Mar 7 00:49:21.562191 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:33068.service - OpenSSH per-connection server daemon (10.200.16.10:33068).
Mar 7 00:49:21.985165 sshd[6298]: Accepted publickey for core from 10.200.16.10 port 33068 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:21.987138 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:21.991286 systemd-logind[1870]: New session 11 of user core.
Mar 7 00:49:21.995865 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 00:49:22.264616 sshd[6301]: Connection closed by 10.200.16.10 port 33068
Mar 7 00:49:22.264430 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:22.268362 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:33068.service: Deactivated successfully.
Mar 7 00:49:22.270716 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 00:49:22.272776 systemd-logind[1870]: Session 11 logged out. Waiting for processes to exit.
Mar 7 00:49:22.274879 systemd-logind[1870]: Removed session 11.
Mar 7 00:49:27.354036 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:33076.service - OpenSSH per-connection server daemon (10.200.16.10:33076).
Mar 7 00:49:27.783397 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 33076 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:27.784628 sshd-session[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:27.788542 systemd-logind[1870]: New session 12 of user core.
Mar 7 00:49:27.795964 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 00:49:28.070780 sshd[6317]: Connection closed by 10.200.16.10 port 33076
Mar 7 00:49:28.070637 sshd-session[6314]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:28.075370 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:33076.service: Deactivated successfully.
Mar 7 00:49:28.077514 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 00:49:28.078510 systemd-logind[1870]: Session 12 logged out. Waiting for processes to exit.
Mar 7 00:49:28.080404 systemd-logind[1870]: Removed session 12.
Mar 7 00:49:33.159928 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:44874.service - OpenSSH per-connection server daemon (10.200.16.10:44874).
Mar 7 00:49:33.583646 sshd[6371]: Accepted publickey for core from 10.200.16.10 port 44874 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:33.585123 sshd-session[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:33.589588 systemd-logind[1870]: New session 13 of user core.
Mar 7 00:49:33.595848 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 00:49:33.871215 sshd[6374]: Connection closed by 10.200.16.10 port 44874
Mar 7 00:49:33.871778 sshd-session[6371]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:33.876232 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:44874.service: Deactivated successfully.
Mar 7 00:49:33.878085 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 00:49:33.879111 systemd-logind[1870]: Session 13 logged out. Waiting for processes to exit.
Mar 7 00:49:33.880506 systemd-logind[1870]: Removed session 13.
Mar 7 00:49:33.979075 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:44890.service - OpenSSH per-connection server daemon (10.200.16.10:44890).
Mar 7 00:49:34.409064 sshd[6387]: Accepted publickey for core from 10.200.16.10 port 44890 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:34.410763 sshd-session[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:34.414572 systemd-logind[1870]: New session 14 of user core.
Mar 7 00:49:34.424065 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 00:49:34.721790 sshd[6390]: Connection closed by 10.200.16.10 port 44890
Mar 7 00:49:34.722379 sshd-session[6387]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:34.726286 systemd-logind[1870]: Session 14 logged out. Waiting for processes to exit.
Mar 7 00:49:34.726847 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:44890.service: Deactivated successfully.
Mar 7 00:49:34.730486 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 00:49:34.732893 systemd-logind[1870]: Removed session 14.
Mar 7 00:49:34.811685 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:44904.service - OpenSSH per-connection server daemon (10.200.16.10:44904).
Mar 7 00:49:35.237085 sshd[6399]: Accepted publickey for core from 10.200.16.10 port 44904 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:35.238823 sshd-session[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:35.242737 systemd-logind[1870]: New session 15 of user core.
Mar 7 00:49:35.251834 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 00:49:35.543764 sshd[6402]: Connection closed by 10.200.16.10 port 44904
Mar 7 00:49:35.544237 sshd-session[6399]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:35.548554 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:44904.service: Deactivated successfully.
Mar 7 00:49:35.551618 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 00:49:35.553189 systemd-logind[1870]: Session 15 logged out. Waiting for processes to exit.
Mar 7 00:49:35.554800 systemd-logind[1870]: Removed session 15.
Mar 7 00:49:40.635892 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:43320.service - OpenSSH per-connection server daemon (10.200.16.10:43320).
Mar 7 00:49:41.062224 sshd[6445]: Accepted publickey for core from 10.200.16.10 port 43320 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:41.063340 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:41.067719 systemd-logind[1870]: New session 16 of user core.
Mar 7 00:49:41.073834 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 00:49:41.343312 sshd[6451]: Connection closed by 10.200.16.10 port 43320
Mar 7 00:49:41.344184 sshd-session[6445]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:41.347739 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:43320.service: Deactivated successfully.
Mar 7 00:49:41.349519 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 00:49:41.350337 systemd-logind[1870]: Session 16 logged out. Waiting for processes to exit.
Mar 7 00:49:41.351536 systemd-logind[1870]: Removed session 16.
Mar 7 00:49:41.436909 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:43326.service - OpenSSH per-connection server daemon (10.200.16.10:43326).
Mar 7 00:49:41.860186 sshd[6463]: Accepted publickey for core from 10.200.16.10 port 43326 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:41.862109 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:41.867931 systemd-logind[1870]: New session 17 of user core.
Mar 7 00:49:41.873835 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 00:49:42.268690 sshd[6466]: Connection closed by 10.200.16.10 port 43326
Mar 7 00:49:42.269374 sshd-session[6463]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:42.273782 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:43326.service: Deactivated successfully.
Mar 7 00:49:42.278137 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:49:42.279238 systemd-logind[1870]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:49:42.280976 systemd-logind[1870]: Removed session 17.
Mar 7 00:49:42.358721 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:43340.service - OpenSSH per-connection server daemon (10.200.16.10:43340).
Mar 7 00:49:42.783119 sshd[6475]: Accepted publickey for core from 10.200.16.10 port 43340 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:42.784352 sshd-session[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:42.788120 systemd-logind[1870]: New session 18 of user core.
Mar 7 00:49:42.792818 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:49:43.588892 sshd[6478]: Connection closed by 10.200.16.10 port 43340
Mar 7 00:49:43.589811 sshd-session[6475]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:43.593669 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:43340.service: Deactivated successfully.
Mar 7 00:49:43.597958 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:49:43.599920 systemd-logind[1870]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:49:43.601455 systemd-logind[1870]: Removed session 18.
Mar 7 00:49:43.679406 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:43348.service - OpenSSH per-connection server daemon (10.200.16.10:43348).
Mar 7 00:49:44.106466 sshd[6503]: Accepted publickey for core from 10.200.16.10 port 43348 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:44.107728 sshd-session[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:44.111679 systemd-logind[1870]: New session 19 of user core.
Mar 7 00:49:44.116886 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:49:44.482809 sshd[6506]: Connection closed by 10.200.16.10 port 43348
Mar 7 00:49:44.485172 sshd-session[6503]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:44.488882 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:43348.service: Deactivated successfully.
Mar 7 00:49:44.491078 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:49:44.491878 systemd-logind[1870]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:49:44.494719 systemd-logind[1870]: Removed session 19.
Mar 7 00:49:44.573603 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:43350.service - OpenSSH per-connection server daemon (10.200.16.10:43350).
Mar 7 00:49:44.997894 sshd[6516]: Accepted publickey for core from 10.200.16.10 port 43350 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:44.999135 sshd-session[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:45.003195 systemd-logind[1870]: New session 20 of user core.
Mar 7 00:49:45.009854 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:49:45.278435 sshd[6519]: Connection closed by 10.200.16.10 port 43350
Mar 7 00:49:45.279072 sshd-session[6516]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:45.282858 systemd-logind[1870]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:49:45.283220 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:43350.service: Deactivated successfully.
Mar 7 00:49:45.285477 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:49:45.288287 systemd-logind[1870]: Removed session 20.
Mar 7 00:49:50.370905 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.16.10:37480.service - OpenSSH per-connection server daemon (10.200.16.10:37480).
Mar 7 00:49:50.801339 sshd[6575]: Accepted publickey for core from 10.200.16.10 port 37480 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:50.816275 sshd-session[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:50.820740 systemd-logind[1870]: New session 21 of user core.
Mar 7 00:49:50.827837 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 00:49:51.085818 sshd[6603]: Connection closed by 10.200.16.10 port 37480
Mar 7 00:49:51.086164 sshd-session[6575]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:51.090512 systemd[1]: sshd@18-10.200.20.17:22-10.200.16.10:37480.service: Deactivated successfully.
Mar 7 00:49:51.093931 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 00:49:51.096331 systemd-logind[1870]: Session 21 logged out. Waiting for processes to exit.
Mar 7 00:49:51.097431 systemd-logind[1870]: Removed session 21.
Mar 7 00:49:56.179735 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.16.10:37490.service - OpenSSH per-connection server daemon (10.200.16.10:37490).
Mar 7 00:49:56.602792 sshd[6632]: Accepted publickey for core from 10.200.16.10 port 37490 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:49:56.603861 sshd-session[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:49:56.608038 systemd-logind[1870]: New session 22 of user core.
Mar 7 00:49:56.614859 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 00:49:56.878909 sshd[6635]: Connection closed by 10.200.16.10 port 37490
Mar 7 00:49:56.879466 sshd-session[6632]: pam_unix(sshd:session): session closed for user core
Mar 7 00:49:56.883695 systemd-logind[1870]: Session 22 logged out. Waiting for processes to exit.
Mar 7 00:49:56.884017 systemd[1]: sshd@19-10.200.20.17:22-10.200.16.10:37490.service: Deactivated successfully.
Mar 7 00:49:56.886216 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 00:49:56.888412 systemd-logind[1870]: Removed session 22.
Mar 7 00:50:01.972501 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.16.10:53666.service - OpenSSH per-connection server daemon (10.200.16.10:53666).
Mar 7 00:50:02.396258 sshd[6647]: Accepted publickey for core from 10.200.16.10 port 53666 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:50:02.398018 sshd-session[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:50:02.401879 systemd-logind[1870]: New session 23 of user core.
Mar 7 00:50:02.407800 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 00:50:02.677242 sshd[6672]: Connection closed by 10.200.16.10 port 53666
Mar 7 00:50:02.678900 sshd-session[6647]: pam_unix(sshd:session): session closed for user core
Mar 7 00:50:02.682213 systemd[1]: sshd@20-10.200.20.17:22-10.200.16.10:53666.service: Deactivated successfully.
Mar 7 00:50:02.685179 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 00:50:02.686334 systemd-logind[1870]: Session 23 logged out. Waiting for processes to exit.
Mar 7 00:50:02.688493 systemd-logind[1870]: Removed session 23.
Mar 7 00:50:07.768894 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.16.10:53680.service - OpenSSH per-connection server daemon (10.200.16.10:53680).
Mar 7 00:50:08.191691 sshd[6705]: Accepted publickey for core from 10.200.16.10 port 53680 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:50:08.192496 sshd-session[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:50:08.196744 systemd-logind[1870]: New session 24 of user core.
Mar 7 00:50:08.201831 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 00:50:08.484844 sshd[6708]: Connection closed by 10.200.16.10 port 53680
Mar 7 00:50:08.484744 sshd-session[6705]: pam_unix(sshd:session): session closed for user core
Mar 7 00:50:08.488893 systemd[1]: sshd@21-10.200.20.17:22-10.200.16.10:53680.service: Deactivated successfully.
Mar 7 00:50:08.490557 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 00:50:08.491278 systemd-logind[1870]: Session 24 logged out. Waiting for processes to exit.
Mar 7 00:50:08.492505 systemd-logind[1870]: Removed session 24.
Mar 7 00:50:13.581461 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.16.10:52174.service - OpenSSH per-connection server daemon (10.200.16.10:52174).
Mar 7 00:50:14.014619 sshd[6745]: Accepted publickey for core from 10.200.16.10 port 52174 ssh2: RSA SHA256:JE8kgEbSicgM9iPPcpD9A3ndRLJ1370afumEFyydKJ0
Mar 7 00:50:14.015514 sshd-session[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:50:14.019639 systemd-logind[1870]: New session 25 of user core.
Mar 7 00:50:14.031869 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 00:50:14.295760 sshd[6748]: Connection closed by 10.200.16.10 port 52174
Mar 7 00:50:14.294778 sshd-session[6745]: pam_unix(sshd:session): session closed for user core
Mar 7 00:50:14.298967 systemd-logind[1870]: Session 25 logged out. Waiting for processes to exit.
Mar 7 00:50:14.299889 systemd[1]: sshd@22-10.200.20.17:22-10.200.16.10:52174.service: Deactivated successfully.
Mar 7 00:50:14.303466 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 00:50:14.304992 systemd-logind[1870]: Removed session 25.