Mar 4 00:50:04.201721 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 00:50:04.201744 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 00:50:04.201753 kernel: KASLR enabled
Mar 4 00:50:04.201759 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 00:50:04.201766 kernel: printk: bootconsole [pl11] enabled
Mar 4 00:50:04.201772 kernel: efi: EFI v2.7 by EDK II
Mar 4 00:50:04.201780 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 00:50:04.201786 kernel: random: crng init done
Mar 4 00:50:04.201792 kernel: ACPI: Early table checksum verification disabled
Mar 4 00:50:04.201798 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 00:50:04.201805 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201811 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201818 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 00:50:04.201825 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201833 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201839 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201846 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201854 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201861 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201868 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 00:50:04.201874 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201881 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 00:50:04.201888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 00:50:04.201895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 00:50:04.201901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 00:50:04.201908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 00:50:04.201914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 00:50:04.201921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 00:50:04.201929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 00:50:04.201936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 00:50:04.201943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 00:50:04.201950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 00:50:04.201956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 00:50:04.201963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 00:50:04.201970 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 00:50:04.201976 kernel: Zone ranges:
Mar 4 00:50:04.201983 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 00:50:04.201989 kernel: DMA32 empty
Mar 4 00:50:04.201996 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:50:04.202003 kernel: Movable zone start for each node
Mar 4 00:50:04.202014 kernel: Early memory node ranges
Mar 4 00:50:04.202021 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 00:50:04.202028 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 00:50:04.202035 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 00:50:04.202042 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 00:50:04.202051 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 00:50:04.202058 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 00:50:04.202065 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:50:04.202073 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 00:50:04.202080 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 00:50:04.202087 kernel: psci: probing for conduit method from ACPI.
Mar 4 00:50:04.202093 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 00:50:04.202100 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 00:50:04.202107 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 00:50:04.202114 kernel: psci: SMC Calling Convention v1.4
Mar 4 00:50:04.202121 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 00:50:04.204172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 00:50:04.204189 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 00:50:04.204197 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 00:50:04.204206 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 00:50:04.204213 kernel: Detected PIPT I-cache on CPU0
Mar 4 00:50:04.204222 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 00:50:04.204230 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 00:50:04.204238 kernel: CPU features: detected: Spectre-BHB
Mar 4 00:50:04.204247 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 00:50:04.204254 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 00:50:04.204263 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 00:50:04.204271 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 00:50:04.204280 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 00:50:04.204287 kernel: alternatives: applying boot alternatives
Mar 4 00:50:04.204297 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:50:04.204304 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 00:50:04.204312 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 00:50:04.204320 kernel: Fallback order for Node 0: 0
Mar 4 00:50:04.204329 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 00:50:04.204337 kernel: Policy zone: Normal
Mar 4 00:50:04.204346 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 00:50:04.204354 kernel: software IO TLB: area num 2.
Mar 4 00:50:04.204361 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 00:50:04.204373 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 00:50:04.204381 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 00:50:04.204389 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 00:50:04.204398 kernel: rcu: RCU event tracing is enabled.
Mar 4 00:50:04.204407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 00:50:04.204414 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 00:50:04.204422 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 00:50:04.204429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 00:50:04.204436 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 00:50:04.204443 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 00:50:04.204450 kernel: GICv3: 960 SPIs implemented
Mar 4 00:50:04.204459 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 00:50:04.204467 kernel: Root IRQ handler: gic_handle_irq
Mar 4 00:50:04.204474 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 00:50:04.204481 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 00:50:04.204488 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 00:50:04.204495 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 00:50:04.204502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:50:04.204510 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 00:50:04.204517 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 00:50:04.204524 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 00:50:04.204531 kernel: Console: colour dummy device 80x25
Mar 4 00:50:04.204541 kernel: printk: console [tty1] enabled
Mar 4 00:50:04.204548 kernel: ACPI: Core revision 20230628
Mar 4 00:50:04.204556 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 00:50:04.204563 kernel: pid_max: default: 32768 minimum: 301
Mar 4 00:50:04.204571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 00:50:04.204578 kernel: landlock: Up and running.
Mar 4 00:50:04.204585 kernel: SELinux: Initializing.
Mar 4 00:50:04.204593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.204600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.204609 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:50:04.204617 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:50:04.204624 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 00:50:04.204632 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 00:50:04.204639 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 00:50:04.204646 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 00:50:04.204654 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 00:50:04.204661 kernel: Remapping and enabling EFI services.
Mar 4 00:50:04.204676 kernel: smp: Bringing up secondary CPUs ...
Mar 4 00:50:04.204683 kernel: Detected PIPT I-cache on CPU1
Mar 4 00:50:04.204691 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 00:50:04.204699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:50:04.204708 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 00:50:04.204716 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 00:50:04.204724 kernel: SMP: Total of 2 processors activated.
Mar 4 00:50:04.204731 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 00:50:04.204739 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 00:50:04.204748 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 00:50:04.204756 kernel: CPU features: detected: CRC32 instructions
Mar 4 00:50:04.204764 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 00:50:04.204772 kernel: CPU features: detected: LSE atomic instructions
Mar 4 00:50:04.204780 kernel: CPU features: detected: Privileged Access Never
Mar 4 00:50:04.204787 kernel: CPU: All CPU(s) started at EL1
Mar 4 00:50:04.204795 kernel: alternatives: applying system-wide alternatives
Mar 4 00:50:04.204803 kernel: devtmpfs: initialized
Mar 4 00:50:04.204810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 00:50:04.204820 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 00:50:04.204828 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 00:50:04.204836 kernel: SMBIOS 3.1.0 present.
Mar 4 00:50:04.204844 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 00:50:04.204852 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 00:50:04.204860 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 00:50:04.204867 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 00:50:04.204875 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 00:50:04.204883 kernel: audit: initializing netlink subsys (disabled)
Mar 4 00:50:04.204892 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 00:50:04.204900 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 00:50:04.204908 kernel: cpuidle: using governor menu
Mar 4 00:50:04.204916 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 00:50:04.204923 kernel: ASID allocator initialised with 32768 entries
Mar 4 00:50:04.204931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 00:50:04.204939 kernel: Serial: AMBA PL011 UART driver
Mar 4 00:50:04.204947 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 00:50:04.204954 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 00:50:04.204964 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 00:50:04.204971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 00:50:04.204980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 00:50:04.204987 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 00:50:04.204995 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 00:50:04.205003 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 00:50:04.205011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 00:50:04.205019 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 00:50:04.205027 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 00:50:04.205036 kernel: ACPI: Added _OSI(Module Device)
Mar 4 00:50:04.205044 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 00:50:04.205052 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 00:50:04.205059 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 00:50:04.205067 kernel: ACPI: Interpreter enabled
Mar 4 00:50:04.205075 kernel: ACPI: Using GIC for interrupt routing
Mar 4 00:50:04.205082 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 00:50:04.205090 kernel: printk: console [ttyAMA0] enabled
Mar 4 00:50:04.205098 kernel: printk: bootconsole [pl11] disabled
Mar 4 00:50:04.205107 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 00:50:04.205115 kernel: iommu: Default domain type: Translated
Mar 4 00:50:04.205127 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 00:50:04.205136 kernel: efivars: Registered efivars operations
Mar 4 00:50:04.205144 kernel: vgaarb: loaded
Mar 4 00:50:04.205152 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 00:50:04.205159 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 00:50:04.205167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 00:50:04.205175 kernel: pnp: PnP ACPI init
Mar 4 00:50:04.205184 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 00:50:04.205192 kernel: NET: Registered PF_INET protocol family
Mar 4 00:50:04.205200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 00:50:04.205208 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 00:50:04.205216 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 00:50:04.205224 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 00:50:04.205231 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 00:50:04.205239 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 00:50:04.205247 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.205256 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.205264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 00:50:04.205272 kernel: PCI: CLS 0 bytes, default 64
Mar 4 00:50:04.205279 kernel: kvm [1]: HYP mode not available
Mar 4 00:50:04.205287 kernel: Initialise system trusted keyrings
Mar 4 00:50:04.205294 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 00:50:04.205302 kernel: Key type asymmetric registered
Mar 4 00:50:04.205310 kernel: Asymmetric key parser 'x509' registered
Mar 4 00:50:04.205317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 00:50:04.205327 kernel: io scheduler mq-deadline registered
Mar 4 00:50:04.205334 kernel: io scheduler kyber registered
Mar 4 00:50:04.205342 kernel: io scheduler bfq registered
Mar 4 00:50:04.205350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 00:50:04.205357 kernel: thunder_xcv, ver 1.0
Mar 4 00:50:04.205365 kernel: thunder_bgx, ver 1.0
Mar 4 00:50:04.205372 kernel: nicpf, ver 1.0
Mar 4 00:50:04.205380 kernel: nicvf, ver 1.0
Mar 4 00:50:04.205525 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 00:50:04.205601 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T00:50:03 UTC (1772585403)
Mar 4 00:50:04.205612 kernel: efifb: probing for efifb
Mar 4 00:50:04.205620 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 00:50:04.205628 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 00:50:04.205635 kernel: efifb: scrolling: redraw
Mar 4 00:50:04.205643 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 00:50:04.205651 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:50:04.205659 kernel: fb0: EFI VGA frame buffer device
Mar 4 00:50:04.205669 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 00:50:04.205677 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 00:50:04.205685 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 00:50:04.205692 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 00:50:04.205700 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 00:50:04.205708 kernel: NET: Registered PF_INET6 protocol family
Mar 4 00:50:04.205715 kernel: Segment Routing with IPv6
Mar 4 00:50:04.205723 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 00:50:04.205731 kernel: NET: Registered PF_PACKET protocol family
Mar 4 00:50:04.205740 kernel: Key type dns_resolver registered
Mar 4 00:50:04.205748 kernel: registered taskstats version 1
Mar 4 00:50:04.205756 kernel: Loading compiled-in X.509 certificates
Mar 4 00:50:04.205763 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 00:50:04.205771 kernel: Key type .fscrypt registered
Mar 4 00:50:04.205779 kernel: Key type fscrypt-provisioning registered
Mar 4 00:50:04.205786 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 00:50:04.205794 kernel: ima: Allocated hash algorithm: sha1
Mar 4 00:50:04.205802 kernel: ima: No architecture policies found
Mar 4 00:50:04.205811 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 00:50:04.205819 kernel: clk: Disabling unused clocks
Mar 4 00:50:04.205827 kernel: Freeing unused kernel memory: 39424K
Mar 4 00:50:04.205834 kernel: Run /init as init process
Mar 4 00:50:04.205842 kernel: with arguments:
Mar 4 00:50:04.205850 kernel: /init
Mar 4 00:50:04.205858 kernel: with environment:
Mar 4 00:50:04.205865 kernel: HOME=/
Mar 4 00:50:04.205873 kernel: TERM=linux
Mar 4 00:50:04.205882 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:50:04.205894 systemd[1]: Detected virtualization microsoft.
Mar 4 00:50:04.205902 systemd[1]: Detected architecture arm64.
Mar 4 00:50:04.205910 systemd[1]: Running in initrd.
Mar 4 00:50:04.205918 systemd[1]: No hostname configured, using default hostname.
Mar 4 00:50:04.205925 systemd[1]: Hostname set to .
Mar 4 00:50:04.205934 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:50:04.205943 systemd[1]: Queued start job for default target initrd.target.
Mar 4 00:50:04.205952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:50:04.205960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:50:04.205969 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 00:50:04.205977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:50:04.205986 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:50:04.205994 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:50:04.206004 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:50:04.206014 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:50:04.206022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:50:04.206031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:50:04.206039 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:50:04.206047 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:50:04.206055 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:50:04.206063 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:50:04.206072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:50:04.206081 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:50:04.206090 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:50:04.206098 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:50:04.206106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:50:04.206114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:50:04.206122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:50:04.208166 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:50:04.208176 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:50:04.208190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:50:04.208198 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:50:04.208207 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:50:04.208215 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:50:04.208223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:50:04.208266 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 00:50:04.208290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:50:04.208299 systemd-journald[217]: Journal started
Mar 4 00:50:04.208318 systemd-journald[217]: Runtime Journal (/run/log/journal/7cdd3bdc57ee4cfc9a42e2f62680ab85) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:50:04.214993 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 00:50:04.236811 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:50:04.236838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:50:04.241358 kernel: Bridge firewalling registered
Mar 4 00:50:04.243789 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 00:50:04.247144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:50:04.256253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:50:04.272114 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:50:04.279643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:50:04.284508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:04.304505 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:50:04.311274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:50:04.330315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:50:04.342822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:50:04.357259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:50:04.368675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:50:04.385618 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:50:04.396556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:50:04.416635 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:50:04.430895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:50:04.441719 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:50:04.457094 dracut-cmdline[249]: dracut-dracut-053
Mar 4 00:50:04.464655 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:50:04.491724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:50:04.495961 systemd-resolved[254]: Positive Trust Anchors:
Mar 4 00:50:04.495971 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:50:04.496002 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:50:04.498195 systemd-resolved[254]: Defaulting to hostname 'linux'.
Mar 4 00:50:04.504570 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:50:04.509736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:50:04.604139 kernel: SCSI subsystem initialized
Mar 4 00:50:04.610144 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:50:04.621147 kernel: iscsi: registered transport (tcp)
Mar 4 00:50:04.638148 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:50:04.638211 kernel: QLogic iSCSI HBA Driver
Mar 4 00:50:04.672908 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:50:04.686250 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:50:04.721646 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:50:04.721711 kernel: device-mapper: uevent: version 1.0.3
Mar 4 00:50:04.727113 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 00:50:04.777141 kernel: raid6: neonx8 gen() 15802 MB/s
Mar 4 00:50:04.794138 kernel: raid6: neonx4 gen() 15687 MB/s
Mar 4 00:50:04.813134 kernel: raid6: neonx2 gen() 13246 MB/s
Mar 4 00:50:04.833135 kernel: raid6: neonx1 gen() 10482 MB/s
Mar 4 00:50:04.852130 kernel: raid6: int64x8 gen() 6982 MB/s
Mar 4 00:50:04.872130 kernel: raid6: int64x4 gen() 7360 MB/s
Mar 4 00:50:04.892131 kernel: raid6: int64x2 gen() 6145 MB/s
Mar 4 00:50:04.915187 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 4 00:50:04.915207 kernel: raid6: using algorithm neonx8 gen() 15802 MB/s
Mar 4 00:50:04.937799 kernel: raid6: .... xor() 12046 MB/s, rmw enabled
Mar 4 00:50:04.937819 kernel: raid6: using neon recovery algorithm
Mar 4 00:50:04.947632 kernel: xor: measuring software checksum speed
Mar 4 00:50:04.947647 kernel: 8regs : 19821 MB/sec
Mar 4 00:50:04.951274 kernel: 32regs : 19650 MB/sec
Mar 4 00:50:04.954045 kernel: arm64_neon : 27061 MB/sec
Mar 4 00:50:04.957213 kernel: xor: using function: arm64_neon (27061 MB/sec)
Mar 4 00:50:05.007137 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 00:50:05.016982 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:50:05.030274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:50:05.050952 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 4 00:50:05.055479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:50:05.075385 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 00:50:05.094509 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Mar 4 00:50:05.122841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:50:05.141288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:50:05.177207 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:50:05.191488 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 00:50:05.218189 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:50:05.236982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:50:05.251465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:50:05.262336 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:50:05.277403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 00:50:05.300502 kernel: hv_vmbus: Vmbus version:5.3
Mar 4 00:50:05.300526 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 4 00:50:05.300537 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 00:50:05.308441 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 4 00:50:05.312160 kernel: hv_vmbus: registering driver hv_storvsc
Mar 4 00:50:05.320371 kernel: scsi host0: storvsc_host_t
Mar 4 00:50:05.320605 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 4 00:50:05.328150 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 4 00:50:05.328606 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:50:05.339797 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 4 00:50:05.359773 kernel: hv_vmbus: registering driver hv_netvsc Mar 4 00:50:05.359805 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 4 00:50:05.359816 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 4 00:50:05.359919 kernel: scsi host1: storvsc_host_t Mar 4 00:50:05.360358 kernel: PTP clock support registered Mar 4 00:50:05.364973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 00:50:05.378716 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 4 00:50:05.365190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:50:05.390616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 00:50:05.395847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 00:50:05.396037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:50:05.406866 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:50:05.426493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:50:05.458342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 4 00:50:05.471366 kernel: hv_utils: Registering HyperV Utility Driver Mar 4 00:50:05.471392 kernel: hv_vmbus: registering driver hv_utils Mar 4 00:50:05.490212 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 4 00:50:05.490449 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 4 00:50:05.490553 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 4 00:50:05.490638 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 4 00:50:05.497264 kernel: hv_utils: Heartbeat IC version 3.0 Mar 4 00:50:05.497332 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 4 00:50:05.497516 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 4 00:50:05.494389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 00:50:05.928353 kernel: hv_utils: Shutdown IC version 3.2 Mar 4 00:50:05.928379 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 4 00:50:05.928533 kernel: hv_utils: TimeSync IC version 4.0 Mar 4 00:50:05.928544 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 4 00:50:05.928635 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: VF slot 1 added Mar 4 00:50:05.928743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:50:05.908439 systemd-resolved[254]: Clock change detected. Flushing caches. 
Mar 4 00:50:05.944642 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 4 00:50:05.944816 kernel: hv_vmbus: registering driver hv_pci Mar 4 00:50:05.954353 kernel: hv_pci a99cb832-ffe2-4b5b-80cf-f3e7ce393e09: PCI VMBus probing: Using version 0x10004 Mar 4 00:50:05.966398 kernel: hv_pci a99cb832-ffe2-4b5b-80cf-f3e7ce393e09: PCI host bridge to bus ffe2:00 Mar 4 00:50:05.979862 kernel: pci_bus ffe2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 4 00:50:05.980001 kernel: pci_bus ffe2:00: No busn resource found for root bus, will use [bus 00-ff] Mar 4 00:50:05.980092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 00:50:05.980183 kernel: pci ffe2:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 4 00:50:05.993955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:50:06.014780 kernel: pci ffe2:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 4 00:50:06.014825 kernel: pci ffe2:00:02.0: enabling Extended Tags Mar 4 00:50:06.039423 kernel: pci ffe2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ffe2:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 4 00:50:06.039641 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#277 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 00:50:06.039737 kernel: pci_bus ffe2:00: busn_res: [bus 00-ff] end is updated to 00 Mar 4 00:50:06.050651 kernel: pci ffe2:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 4 00:50:06.091915 kernel: mlx5_core ffe2:00:02.0: enabling device (0000 -> 0002) Mar 4 00:50:06.098319 kernel: mlx5_core ffe2:00:02.0: firmware version: 16.30.5026 Mar 4 00:50:06.302620 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: VF registering: eth1 Mar 4 00:50:06.302822 kernel: mlx5_core ffe2:00:02.0 eth1: joined to eth0 Mar 4 00:50:06.309368 kernel: mlx5_core ffe2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 4 00:50:06.319330 kernel: mlx5_core ffe2:00:02.0 enP65506s1: renamed from eth1 Mar 4 00:50:06.547323 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484) Mar 4 00:50:06.566334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 4 00:50:06.587753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 4 00:50:06.603980 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 4 00:50:06.659318 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (501) Mar 4 00:50:06.673632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 4 00:50:06.678954 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 4 00:50:06.705507 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 4 00:50:06.730326 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:50:06.738326 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:50:07.741122 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:50:07.741185 disk-uuid[606]: The operation has completed successfully. Mar 4 00:50:07.810628 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 00:50:07.814480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 00:50:07.843505 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 4 00:50:07.854863 sh[692]: Success Mar 4 00:50:07.886341 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 4 00:50:08.166659 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 00:50:08.172532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 4 00:50:08.185435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 00:50:08.214302 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d Mar 4 00:50:08.214359 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:50:08.220090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 00:50:08.224295 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 00:50:08.227968 kernel: BTRFS info (device dm-0): using free space tree Mar 4 00:50:08.568493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 00:50:08.572677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 00:50:08.590552 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 00:50:08.598499 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 4 00:50:08.633793 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:50:08.633845 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:50:08.637540 kernel: BTRFS info (device sda6): using free space tree Mar 4 00:50:08.680561 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 00:50:08.699340 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 00:50:08.701529 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 4 00:50:08.730323 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:50:08.734090 systemd-networkd[866]: lo: Link UP Mar 4 00:50:08.735348 systemd-networkd[866]: lo: Gained carrier Mar 4 00:50:08.738387 systemd-networkd[866]: Enumeration completed Mar 4 00:50:08.739096 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 00:50:08.742529 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:50:08.742533 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 00:50:08.750333 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 00:50:08.756782 systemd[1]: Reached target network.target - Network. Mar 4 00:50:08.785570 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 00:50:08.843325 kernel: mlx5_core ffe2:00:02.0 enP65506s1: Link up Mar 4 00:50:08.884465 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: Data path switched to VF: enP65506s1 Mar 4 00:50:08.884532 systemd-networkd[866]: enP65506s1: Link UP Mar 4 00:50:08.884612 systemd-networkd[866]: eth0: Link UP Mar 4 00:50:08.884736 systemd-networkd[866]: eth0: Gained carrier Mar 4 00:50:08.884744 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:50:08.905513 systemd-networkd[866]: enP65506s1: Gained carrier Mar 4 00:50:08.920354 systemd-networkd[866]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 4 00:50:09.857546 ignition[877]: Ignition 2.19.0 Mar 4 00:50:09.857557 ignition[877]: Stage: fetch-offline Mar 4 00:50:09.857597 ignition[877]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:09.861512 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 4 00:50:09.857604 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:09.880448 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 4 00:50:09.857705 ignition[877]: parsed url from cmdline: "" Mar 4 00:50:09.857708 ignition[877]: no config URL provided Mar 4 00:50:09.857713 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 00:50:09.857720 ignition[877]: no config at "/usr/lib/ignition/user.ign" Mar 4 00:50:09.857724 ignition[877]: failed to fetch config: resource requires networking Mar 4 00:50:09.860509 ignition[877]: Ignition finished successfully Mar 4 00:50:09.900201 ignition[885]: Ignition 2.19.0 Mar 4 00:50:09.900208 ignition[885]: Stage: fetch Mar 4 00:50:09.900395 ignition[885]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:09.900405 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:09.900498 ignition[885]: parsed url from cmdline: "" Mar 4 00:50:09.900501 ignition[885]: no config URL provided Mar 4 00:50:09.900506 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 00:50:09.900512 ignition[885]: no config at "/usr/lib/ignition/user.ign" Mar 4 00:50:09.900533 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 4 00:50:09.993408 systemd-networkd[866]: eth0: Gained IPv6LL Mar 4 00:50:10.035171 ignition[885]: GET result: OK Mar 4 00:50:10.035264 ignition[885]: config has been read from IMDS userdata Mar 4 00:50:10.035339 ignition[885]: parsing config with SHA512: 99be2b9930bbec9c9f380d896fc110ff8b30666d195ca907bc22e27d4473d706a142c6601bfd574f686590a23b8c2e4f77dd58bba88193cb70b387bd083d0003 Mar 4 00:50:10.039503 unknown[885]: fetched base config from "system" Mar 4 00:50:10.039511 unknown[885]: fetched base config from "system" Mar 4 00:50:10.041711 ignition[885]: fetch: fetch complete Mar 4 00:50:10.039517 unknown[885]: fetched user config from "azure"
Mar 4 00:50:10.041716 ignition[885]: fetch: fetch passed Mar 4 00:50:10.043492 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 4 00:50:10.041778 ignition[885]: Ignition finished successfully Mar 4 00:50:10.061579 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 4 00:50:10.081143 ignition[891]: Ignition 2.19.0 Mar 4 00:50:10.081152 ignition[891]: Stage: kargs Mar 4 00:50:10.086618 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 4 00:50:10.081340 ignition[891]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:10.081349 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:10.082269 ignition[891]: kargs: kargs passed Mar 4 00:50:10.082333 ignition[891]: Ignition finished successfully Mar 4 00:50:10.107586 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 4 00:50:10.125897 ignition[897]: Ignition 2.19.0 Mar 4 00:50:10.125906 ignition[897]: Stage: disks Mar 4 00:50:10.130110 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 4 00:50:10.126077 ignition[897]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:10.136642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 4 00:50:10.126086 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:10.145535 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 4 00:50:10.126993 ignition[897]: disks: disks passed Mar 4 00:50:10.154724 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 00:50:10.127036 ignition[897]: Ignition finished successfully Mar 4 00:50:10.163897 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 00:50:10.173199 systemd[1]: Reached target basic.target - Basic System. Mar 4 00:50:10.190560 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 4 00:50:10.278370 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 4 00:50:10.286564 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 4 00:50:10.301537 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 4 00:50:10.362322 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none. Mar 4 00:50:10.363248 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 4 00:50:10.367046 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 4 00:50:10.414409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 00:50:10.444791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (916) Mar 4 00:50:10.444853 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:50:10.444866 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:50:10.448465 kernel: BTRFS info (device sda6): using free space tree Mar 4 00:50:10.449508 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 4 00:50:10.456496 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 4 00:50:10.474334 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 00:50:10.475853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 4 00:50:10.475898 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 00:50:10.482647 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 4 00:50:10.495396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 4 00:50:10.516590 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 4 00:50:11.168648 coreos-metadata[931]: Mar 04 00:50:11.168 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 4 00:50:11.177076 coreos-metadata[931]: Mar 04 00:50:11.177 INFO Fetch successful Mar 4 00:50:11.181538 coreos-metadata[931]: Mar 04 00:50:11.181 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 4 00:50:11.200025 coreos-metadata[931]: Mar 04 00:50:11.200 INFO Fetch successful Mar 4 00:50:11.221666 coreos-metadata[931]: Mar 04 00:50:11.221 INFO wrote hostname ci-4081.3.6-n-4860195aa5 to /sysroot/etc/hostname Mar 4 00:50:11.230088 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 4 00:50:11.402882 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory Mar 4 00:50:11.447808 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory Mar 4 00:50:11.456359 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory Mar 4 00:50:11.464168 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Mar 4 00:50:12.638245 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 4 00:50:12.650510 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 4 00:50:12.656724 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 4 00:50:12.677843 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 4 00:50:12.681680 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:50:12.706204 ignition[1035]: INFO : Ignition 2.19.0 Mar 4 00:50:12.706204 ignition[1035]: INFO : Stage: mount Mar 4 00:50:12.721714 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:12.721714 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:12.721714 ignition[1035]: INFO : mount: mount passed Mar 4 00:50:12.721714 ignition[1035]: INFO : Ignition finished successfully Mar 4 00:50:12.709196 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 4 00:50:12.714793 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 4 00:50:12.737445 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 4 00:50:12.751550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 00:50:12.779322 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1046) Mar 4 00:50:12.790074 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:50:12.790117 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:50:12.793379 kernel: BTRFS info (device sda6): using free space tree Mar 4 00:50:12.801335 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 00:50:12.801690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 4 00:50:12.826984 ignition[1063]: INFO : Ignition 2.19.0 Mar 4 00:50:12.826984 ignition[1063]: INFO : Stage: files Mar 4 00:50:12.833120 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:12.833120 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:12.833120 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Mar 4 00:50:12.847880 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 4 00:50:12.847880 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 4 00:50:12.982960 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 4 00:50:12.988761 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 4 00:50:12.988761 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 4 00:50:12.983626 unknown[1063]: wrote ssh authorized keys file for user: core Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 4 00:50:13.057921 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 4 00:50:13.261211 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:50:13.261211 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 00:50:13.278022 
ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 4 00:50:13.694717 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 4 00:50:14.152660 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 00:50:14.152660 ignition[1063]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 4 00:50:14.262530 ignition[1063]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 4 00:50:14.273347 ignition[1063]: INFO : files: files passed Mar 4 00:50:14.273347 ignition[1063]: INFO : Ignition finished successfully Mar 4 00:50:14.274577 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 4 00:50:14.300035 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 4 00:50:14.312525 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 4 00:50:14.390708 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 00:50:14.329789 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 4 00:50:14.405400 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 00:50:14.405400 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 4 00:50:14.329899 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 4 00:50:14.379670 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 00:50:14.385934 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 4 00:50:14.406579 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 4 00:50:14.450782 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 4 00:50:14.450941 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Mar 4 00:50:14.460052 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 4 00:50:14.468972 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 4 00:50:14.477047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 4 00:50:14.489566 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 4 00:50:14.501927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 00:50:14.516425 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 4 00:50:14.532138 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 4 00:50:14.537819 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 00:50:14.547378 systemd[1]: Stopped target timers.target - Timer Units. Mar 4 00:50:14.555766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 4 00:50:14.555942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 00:50:14.568279 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 4 00:50:14.577593 systemd[1]: Stopped target basic.target - Basic System. Mar 4 00:50:14.585827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 4 00:50:14.594282 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 00:50:14.603529 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 4 00:50:14.613063 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 4 00:50:14.621719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 4 00:50:14.631133 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 4 00:50:14.640560 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Mar 4 00:50:14.648804 systemd[1]: Stopped target swap.target - Swaps. Mar 4 00:50:14.656620 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 4 00:50:14.656795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 4 00:50:14.667985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 4 00:50:14.676803 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 00:50:14.686059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 4 00:50:14.686165 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 00:50:14.696017 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 4 00:50:14.696181 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 4 00:50:14.709861 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 4 00:50:14.710033 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 00:50:14.719274 systemd[1]: ignition-files.service: Deactivated successfully. Mar 4 00:50:14.719437 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 4 00:50:14.727616 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 4 00:50:14.727766 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 4 00:50:14.757432 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 4 00:50:14.764890 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 4 00:50:14.765122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 4 00:50:14.786410 ignition[1116]: INFO : Ignition 2.19.0 Mar 4 00:50:14.786410 ignition[1116]: INFO : Stage: umount Mar 4 00:50:14.786410 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:50:14.786410 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:50:14.786410 ignition[1116]: INFO : umount: umount passed Mar 4 00:50:14.786410 ignition[1116]: INFO : Ignition finished successfully Mar 4 00:50:14.793457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 4 00:50:14.800284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 4 00:50:14.800550 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 00:50:14.805812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 4 00:50:14.805961 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 00:50:14.820752 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 4 00:50:14.821489 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 4 00:50:14.821593 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 4 00:50:14.833532 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 4 00:50:14.833638 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 4 00:50:14.840793 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 4 00:50:14.840847 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 4 00:50:14.847726 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 4 00:50:14.847770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 4 00:50:14.856147 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 4 00:50:14.856198 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 4 00:50:14.864006 systemd[1]: Stopped target network.target - Network. 
Mar 4 00:50:14.871652 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:50:14.871703 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:50:14.882452 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:50:14.890067 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:50:14.893326 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:50:14.899480 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:50:14.906985 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:50:14.914699 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:50:14.914746 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:50:14.924393 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:50:14.924444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:50:14.934517 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:50:14.934565 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:50:14.943839 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:50:14.943880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:50:14.952012 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:50:14.959850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:50:14.972340 systemd-networkd[866]: eth0: DHCPv6 lease lost
Mar 4 00:50:14.973702 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:50:14.973939 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:50:14.983754 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:50:14.984161 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:50:14.992287 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:50:15.156503 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: Data path switched from VF: enP65506s1
Mar 4 00:50:14.992353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:50:15.017516 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:50:15.027057 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:50:15.027124 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:50:15.036001 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:50:15.036050 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:50:15.044081 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:50:15.044129 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:50:15.053028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:50:15.053074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:50:15.061914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:50:15.094664 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:50:15.094848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:50:15.104985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:50:15.105051 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:50:15.113634 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:50:15.113673 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:50:15.121002 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:50:15.121047 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:50:15.133545 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:50:15.133596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:50:15.152806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:50:15.152886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:50:15.185485 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:50:15.195375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:50:15.195444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:50:15.206449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:50:15.206494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:15.216364 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:50:15.216475 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:50:15.224768 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:50:15.224859 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:50:15.712575 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:50:15.712693 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:50:15.716915 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:50:15.724607 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:50:15.724671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:50:15.745603 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:50:15.921411 systemd[1]: Switching root.
Mar 4 00:50:15.949629 systemd-journald[217]: Journal stopped
Mar 4 00:50:04.201721 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 00:50:04.201744 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 00:50:04.201753 kernel: KASLR enabled
Mar 4 00:50:04.201759 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 00:50:04.201766 kernel: printk: bootconsole [pl11] enabled
Mar 4 00:50:04.201772 kernel: efi: EFI v2.7 by EDK II
Mar 4 00:50:04.201780 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 00:50:04.201786 kernel: random: crng init done
Mar 4 00:50:04.201792 kernel: ACPI: Early table checksum verification disabled
Mar 4 00:50:04.201798 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 00:50:04.201805 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201811 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201818 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 00:50:04.201825 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201833 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201839 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201846 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201854 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201861 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201868 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 00:50:04.201874 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:50:04.201881 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 00:50:04.201888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 00:50:04.201895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 00:50:04.201901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 00:50:04.201908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 00:50:04.201914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 00:50:04.201921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 00:50:04.201929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 00:50:04.201936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 00:50:04.201943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 00:50:04.201950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 00:50:04.201956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 00:50:04.201963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 00:50:04.201970 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 00:50:04.201976 kernel: Zone ranges:
Mar 4 00:50:04.201983 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 00:50:04.201989 kernel: DMA32 empty
Mar 4 00:50:04.201996 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:50:04.202003 kernel: Movable zone start for each node
Mar 4 00:50:04.202014 kernel: Early memory node ranges
Mar 4 00:50:04.202021 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 00:50:04.202028 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 00:50:04.202035 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 00:50:04.202042 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 00:50:04.202051 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 00:50:04.202058 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 00:50:04.202065 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:50:04.202073 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 00:50:04.202080 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 00:50:04.202087 kernel: psci: probing for conduit method from ACPI.
Mar 4 00:50:04.202093 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 00:50:04.202100 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 00:50:04.202107 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 00:50:04.202114 kernel: psci: SMC Calling Convention v1.4
Mar 4 00:50:04.202121 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 00:50:04.204172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 00:50:04.204189 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 00:50:04.204197 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 00:50:04.204206 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 00:50:04.204213 kernel: Detected PIPT I-cache on CPU0
Mar 4 00:50:04.204222 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 00:50:04.204230 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 00:50:04.204238 kernel: CPU features: detected: Spectre-BHB
Mar 4 00:50:04.204247 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 00:50:04.204254 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 00:50:04.204263 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 00:50:04.204271 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 00:50:04.204280 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 00:50:04.204287 kernel: alternatives: applying boot alternatives
Mar 4 00:50:04.204297 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:50:04.204304 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 00:50:04.204312 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 00:50:04.204320 kernel: Fallback order for Node 0: 0
Mar 4 00:50:04.204329 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 00:50:04.204337 kernel: Policy zone: Normal
Mar 4 00:50:04.204346 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 00:50:04.204354 kernel: software IO TLB: area num 2.
Mar 4 00:50:04.204361 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 00:50:04.204373 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 00:50:04.204381 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 00:50:04.204389 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 00:50:04.204398 kernel: rcu: RCU event tracing is enabled.
Mar 4 00:50:04.204407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 00:50:04.204414 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 00:50:04.204422 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 00:50:04.204429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 00:50:04.204436 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 00:50:04.204443 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 00:50:04.204450 kernel: GICv3: 960 SPIs implemented
Mar 4 00:50:04.204459 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 00:50:04.204467 kernel: Root IRQ handler: gic_handle_irq
Mar 4 00:50:04.204474 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 00:50:04.204481 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 00:50:04.204488 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 00:50:04.204495 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 00:50:04.204502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:50:04.204510 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 00:50:04.204517 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 00:50:04.204524 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 00:50:04.204531 kernel: Console: colour dummy device 80x25
Mar 4 00:50:04.204541 kernel: printk: console [tty1] enabled
Mar 4 00:50:04.204548 kernel: ACPI: Core revision 20230628
Mar 4 00:50:04.204556 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 00:50:04.204563 kernel: pid_max: default: 32768 minimum: 301
Mar 4 00:50:04.204571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 00:50:04.204578 kernel: landlock: Up and running.
Mar 4 00:50:04.204585 kernel: SELinux: Initializing.
Mar 4 00:50:04.204593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.204600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.204609 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:50:04.204617 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:50:04.204624 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 00:50:04.204632 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 00:50:04.204639 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 00:50:04.204646 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 00:50:04.204654 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 00:50:04.204661 kernel: Remapping and enabling EFI services.
Mar 4 00:50:04.204676 kernel: smp: Bringing up secondary CPUs ...
Mar 4 00:50:04.204683 kernel: Detected PIPT I-cache on CPU1
Mar 4 00:50:04.204691 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 00:50:04.204699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:50:04.204708 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 00:50:04.204716 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 00:50:04.204724 kernel: SMP: Total of 2 processors activated.
Mar 4 00:50:04.204731 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 00:50:04.204739 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 00:50:04.204748 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 00:50:04.204756 kernel: CPU features: detected: CRC32 instructions
Mar 4 00:50:04.204764 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 00:50:04.204772 kernel: CPU features: detected: LSE atomic instructions
Mar 4 00:50:04.204780 kernel: CPU features: detected: Privileged Access Never
Mar 4 00:50:04.204787 kernel: CPU: All CPU(s) started at EL1
Mar 4 00:50:04.204795 kernel: alternatives: applying system-wide alternatives
Mar 4 00:50:04.204803 kernel: devtmpfs: initialized
Mar 4 00:50:04.204810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 00:50:04.204820 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 00:50:04.204828 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 00:50:04.204836 kernel: SMBIOS 3.1.0 present.
Mar 4 00:50:04.204844 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 00:50:04.204852 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 00:50:04.204860 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 00:50:04.204867 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 00:50:04.204875 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 00:50:04.204883 kernel: audit: initializing netlink subsys (disabled)
Mar 4 00:50:04.204892 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 00:50:04.204900 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 00:50:04.204908 kernel: cpuidle: using governor menu
Mar 4 00:50:04.204916 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 00:50:04.204923 kernel: ASID allocator initialised with 32768 entries
Mar 4 00:50:04.204931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 00:50:04.204939 kernel: Serial: AMBA PL011 UART driver
Mar 4 00:50:04.204947 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 00:50:04.204954 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 00:50:04.204964 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 00:50:04.204971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 00:50:04.204980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 00:50:04.204987 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 00:50:04.204995 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 00:50:04.205003 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 00:50:04.205011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 00:50:04.205019 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 00:50:04.205027 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 00:50:04.205036 kernel: ACPI: Added _OSI(Module Device)
Mar 4 00:50:04.205044 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 00:50:04.205052 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 00:50:04.205059 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 00:50:04.205067 kernel: ACPI: Interpreter enabled
Mar 4 00:50:04.205075 kernel: ACPI: Using GIC for interrupt routing
Mar 4 00:50:04.205082 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 00:50:04.205090 kernel: printk: console [ttyAMA0] enabled
Mar 4 00:50:04.205098 kernel: printk: bootconsole [pl11] disabled
Mar 4 00:50:04.205107 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 00:50:04.205115 kernel: iommu: Default domain type: Translated
Mar 4 00:50:04.205127 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 00:50:04.205136 kernel: efivars: Registered efivars operations
Mar 4 00:50:04.205144 kernel: vgaarb: loaded
Mar 4 00:50:04.205152 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 00:50:04.205159 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 00:50:04.205167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 00:50:04.205175 kernel: pnp: PnP ACPI init
Mar 4 00:50:04.205184 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 00:50:04.205192 kernel: NET: Registered PF_INET protocol family
Mar 4 00:50:04.205200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 00:50:04.205208 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 00:50:04.205216 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 00:50:04.205224 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 00:50:04.205231 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 00:50:04.205239 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 00:50:04.205247 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.205256 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:50:04.205264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 00:50:04.205272 kernel: PCI: CLS 0 bytes, default 64
Mar 4 00:50:04.205279 kernel: kvm [1]: HYP mode not available
Mar 4 00:50:04.205287 kernel: Initialise system trusted keyrings
Mar 4 00:50:04.205294 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 00:50:04.205302 kernel: Key type asymmetric registered
Mar 4 00:50:04.205310 kernel: Asymmetric key parser 'x509' registered
Mar 4 00:50:04.205317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 00:50:04.205327 kernel: io scheduler mq-deadline registered
Mar 4 00:50:04.205334 kernel: io scheduler kyber registered
Mar 4 00:50:04.205342 kernel: io scheduler bfq registered
Mar 4 00:50:04.205350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 00:50:04.205357 kernel: thunder_xcv, ver 1.0
Mar 4 00:50:04.205365 kernel: thunder_bgx, ver 1.0
Mar 4 00:50:04.205372 kernel: nicpf, ver 1.0
Mar 4 00:50:04.205380 kernel: nicvf, ver 1.0
Mar 4 00:50:04.205525 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 00:50:04.205601 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T00:50:03 UTC (1772585403)
Mar 4 00:50:04.205612 kernel: efifb: probing for efifb
Mar 4 00:50:04.205620 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 00:50:04.205628 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 00:50:04.205635 kernel: efifb: scrolling: redraw
Mar 4 00:50:04.205643 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 00:50:04.205651 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:50:04.205659 kernel: fb0: EFI VGA frame buffer device
Mar 4 00:50:04.205669 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 00:50:04.205677 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 00:50:04.205685 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 00:50:04.205692 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 00:50:04.205700 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 00:50:04.205708 kernel: NET: Registered PF_INET6 protocol family
Mar 4 00:50:04.205715 kernel: Segment Routing with IPv6
Mar 4 00:50:04.205723 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 00:50:04.205731 kernel: NET: Registered PF_PACKET protocol family
Mar 4 00:50:04.205740 kernel: Key type dns_resolver registered
Mar 4 00:50:04.205748 kernel: registered taskstats version 1
Mar 4 00:50:04.205756 kernel: Loading compiled-in X.509 certificates
Mar 4 00:50:04.205763 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 00:50:04.205771 kernel: Key type .fscrypt registered
Mar 4 00:50:04.205779 kernel: Key type fscrypt-provisioning registered
Mar 4 00:50:04.205786 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 00:50:04.205794 kernel: ima: Allocated hash algorithm: sha1
Mar 4 00:50:04.205802 kernel: ima: No architecture policies found
Mar 4 00:50:04.205811 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 00:50:04.205819 kernel: clk: Disabling unused clocks
Mar 4 00:50:04.205827 kernel: Freeing unused kernel memory: 39424K
Mar 4 00:50:04.205834 kernel: Run /init as init process
Mar 4 00:50:04.205842 kernel: with arguments:
Mar 4 00:50:04.205850 kernel: /init
Mar 4 00:50:04.205858 kernel: with environment:
Mar 4 00:50:04.205865 kernel: HOME=/
Mar 4 00:50:04.205873 kernel: TERM=linux
Mar 4 00:50:04.205882 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:50:04.205894 systemd[1]: Detected virtualization microsoft.
Mar 4 00:50:04.205902 systemd[1]: Detected architecture arm64.
Mar 4 00:50:04.205910 systemd[1]: Running in initrd.
Mar 4 00:50:04.205918 systemd[1]: No hostname configured, using default hostname.
Mar 4 00:50:04.205925 systemd[1]: Hostname set to .
Mar 4 00:50:04.205934 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:50:04.205943 systemd[1]: Queued start job for default target initrd.target.
Mar 4 00:50:04.205952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:50:04.205960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:50:04.205969 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 00:50:04.205977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:50:04.205986 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:50:04.205994 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:50:04.206004 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:50:04.206014 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:50:04.206022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:50:04.206031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:50:04.206039 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:50:04.206047 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:50:04.206055 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:50:04.206063 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:50:04.206072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:50:04.206081 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:50:04.206090 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:50:04.206098 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:50:04.206106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:50:04.206114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:50:04.206122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:50:04.208166 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:50:04.208176 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:50:04.208190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:50:04.208198 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:50:04.208207 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:50:04.208215 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:50:04.208223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:50:04.208266 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 00:50:04.208290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:50:04.208299 systemd-journald[217]: Journal started
Mar 4 00:50:04.208318 systemd-journald[217]: Runtime Journal (/run/log/journal/7cdd3bdc57ee4cfc9a42e2f62680ab85) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:50:04.214993 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 00:50:04.236811 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:50:04.236838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:50:04.241358 kernel: Bridge firewalling registered
Mar 4 00:50:04.243789 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 00:50:04.247144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:50:04.256253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:50:04.272114 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:50:04.279643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:50:04.284508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:04.304505 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:50:04.311274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:50:04.330315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:50:04.342822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:50:04.357259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:50:04.368675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:50:04.385618 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:50:04.396556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:50:04.416635 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:50:04.430895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:50:04.441719 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:50:04.457094 dracut-cmdline[249]: dracut-dracut-053
Mar 4 00:50:04.464655 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:50:04.491724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:50:04.495961 systemd-resolved[254]: Positive Trust Anchors:
Mar 4 00:50:04.495971 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:50:04.496002 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:50:04.498195 systemd-resolved[254]: Defaulting to hostname 'linux'.
Mar 4 00:50:04.504570 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:50:04.509736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:50:04.604139 kernel: SCSI subsystem initialized
Mar 4 00:50:04.610144 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:50:04.621147 kernel: iscsi: registered transport (tcp)
Mar 4 00:50:04.638148 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:50:04.638211 kernel: QLogic iSCSI HBA Driver
Mar 4 00:50:04.672908 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:50:04.686250 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:50:04.721646 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:50:04.721711 kernel: device-mapper: uevent: version 1.0.3
Mar 4 00:50:04.727113 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 00:50:04.777141 kernel: raid6: neonx8 gen() 15802 MB/s
Mar 4 00:50:04.794138 kernel: raid6: neonx4 gen() 15687 MB/s
Mar 4 00:50:04.813134 kernel: raid6: neonx2 gen() 13246 MB/s
Mar 4 00:50:04.833135 kernel: raid6: neonx1 gen() 10482 MB/s
Mar 4 00:50:04.852130 kernel: raid6: int64x8 gen() 6982 MB/s
Mar 4 00:50:04.872130 kernel: raid6: int64x4 gen() 7360 MB/s
Mar 4 00:50:04.892131 kernel: raid6: int64x2 gen() 6145 MB/s
Mar 4 00:50:04.915187 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 4 00:50:04.915207 kernel: raid6: using algorithm neonx8 gen() 15802 MB/s
Mar 4 00:50:04.937799 kernel: raid6: .... xor() 12046 MB/s, rmw enabled
Mar 4 00:50:04.937819 kernel: raid6: using neon recovery algorithm
Mar 4 00:50:04.947632 kernel: xor: measuring software checksum speed
Mar 4 00:50:04.947647 kernel: 8regs : 19821 MB/sec
Mar 4 00:50:04.951274 kernel: 32regs : 19650 MB/sec
Mar 4 00:50:04.954045 kernel: arm64_neon : 27061 MB/sec
Mar 4 00:50:04.957213 kernel: xor: using function: arm64_neon (27061 MB/sec)
Mar 4 00:50:05.007137 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 00:50:05.016982 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:50:05.030274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:50:05.050952 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 4 00:50:05.055479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:50:05.075385 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 00:50:05.094509 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Mar 4 00:50:05.122841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:50:05.141288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:50:05.177207 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:50:05.191488 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 00:50:05.218189 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:50:05.236982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:50:05.251465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:50:05.262336 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:50:05.277403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 00:50:05.300502 kernel: hv_vmbus: Vmbus version:5.3
Mar 4 00:50:05.300526 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 4 00:50:05.300537 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 00:50:05.308441 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 4 00:50:05.312160 kernel: hv_vmbus: registering driver hv_storvsc
Mar 4 00:50:05.320371 kernel: scsi host0: storvsc_host_t
Mar 4 00:50:05.320605 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 4 00:50:05.328150 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 4 00:50:05.328606 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:50:05.339797 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 4 00:50:05.359773 kernel: hv_vmbus: registering driver hv_netvsc
Mar 4 00:50:05.359805 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 4 00:50:05.359816 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 4 00:50:05.359919 kernel: scsi host1: storvsc_host_t
Mar 4 00:50:05.360358 kernel: PTP clock support registered
Mar 4 00:50:05.364973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:50:05.378716 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 4 00:50:05.365190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:50:05.390616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:50:05.395847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:50:05.396037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:05.406866 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:50:05.426493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:50:05.458342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:05.471366 kernel: hv_utils: Registering HyperV Utility Driver
Mar 4 00:50:05.471392 kernel: hv_vmbus: registering driver hv_utils
Mar 4 00:50:05.490212 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 4 00:50:05.490449 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 4 00:50:05.490553 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 4 00:50:05.490638 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 00:50:05.497264 kernel: hv_utils: Heartbeat IC version 3.0
Mar 4 00:50:05.497332 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 4 00:50:05.497516 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 4 00:50:05.494389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:50:05.928353 kernel: hv_utils: Shutdown IC version 3.2
Mar 4 00:50:05.928379 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 4 00:50:05.928533 kernel: hv_utils: TimeSync IC version 4.0
Mar 4 00:50:05.928544 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 4 00:50:05.928635 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: VF slot 1 added
Mar 4 00:50:05.928743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:50:05.908439 systemd-resolved[254]: Clock change detected. Flushing caches.
Mar 4 00:50:05.944642 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 4 00:50:05.944816 kernel: hv_vmbus: registering driver hv_pci
Mar 4 00:50:05.954353 kernel: hv_pci a99cb832-ffe2-4b5b-80cf-f3e7ce393e09: PCI VMBus probing: Using version 0x10004
Mar 4 00:50:05.966398 kernel: hv_pci a99cb832-ffe2-4b5b-80cf-f3e7ce393e09: PCI host bridge to bus ffe2:00
Mar 4 00:50:05.979862 kernel: pci_bus ffe2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 4 00:50:05.980001 kernel: pci_bus ffe2:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 4 00:50:05.980092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:50:05.980183 kernel: pci ffe2:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 4 00:50:05.993955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:50:06.014780 kernel: pci ffe2:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:50:06.014825 kernel: pci ffe2:00:02.0: enabling Extended Tags
Mar 4 00:50:06.039423 kernel: pci ffe2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ffe2:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 4 00:50:06.039641 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#277 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:50:06.039737 kernel: pci_bus ffe2:00: busn_res: [bus 00-ff] end is updated to 00
Mar 4 00:50:06.050651 kernel: pci ffe2:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:50:06.091915 kernel: mlx5_core ffe2:00:02.0: enabling device (0000 -> 0002)
Mar 4 00:50:06.098319 kernel: mlx5_core ffe2:00:02.0: firmware version: 16.30.5026
Mar 4 00:50:06.302620 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: VF registering: eth1
Mar 4 00:50:06.302822 kernel: mlx5_core ffe2:00:02.0 eth1: joined to eth0
Mar 4 00:50:06.309368 kernel: mlx5_core ffe2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 4 00:50:06.319330 kernel: mlx5_core ffe2:00:02.0 enP65506s1: renamed from eth1
Mar 4 00:50:06.547323 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484)
Mar 4 00:50:06.566334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:50:06.587753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 4 00:50:06.603980 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 4 00:50:06.659318 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (501)
Mar 4 00:50:06.673632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 4 00:50:06.678954 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 4 00:50:06.705507 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 00:50:06.730326 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:50:06.738326 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:50:07.741122 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:50:07.741185 disk-uuid[606]: The operation has completed successfully.
Mar 4 00:50:07.810628 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 00:50:07.814480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 00:50:07.843505 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 00:50:07.854863 sh[692]: Success
Mar 4 00:50:07.886341 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 4 00:50:08.166659 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 00:50:08.172532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 00:50:08.185435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 00:50:08.214302 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d
Mar 4 00:50:08.214359 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:50:08.220090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 00:50:08.224295 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 00:50:08.227968 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 00:50:08.568493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 00:50:08.572677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 00:50:08.590552 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 00:50:08.598499 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 00:50:08.633793 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:50:08.633845 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:50:08.637540 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:50:08.680561 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:50:08.699340 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:50:08.701529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:50:08.730323 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:50:08.734090 systemd-networkd[866]: lo: Link UP
Mar 4 00:50:08.735348 systemd-networkd[866]: lo: Gained carrier
Mar 4 00:50:08.738387 systemd-networkd[866]: Enumeration completed
Mar 4 00:50:08.739096 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:50:08.742529 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:50:08.742533 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:50:08.750333 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 00:50:08.756782 systemd[1]: Reached target network.target - Network.
Mar 4 00:50:08.785570 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 00:50:08.843325 kernel: mlx5_core ffe2:00:02.0 enP65506s1: Link up
Mar 4 00:50:08.884465 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: Data path switched to VF: enP65506s1
Mar 4 00:50:08.884532 systemd-networkd[866]: enP65506s1: Link UP
Mar 4 00:50:08.884612 systemd-networkd[866]: eth0: Link UP
Mar 4 00:50:08.884736 systemd-networkd[866]: eth0: Gained carrier
Mar 4 00:50:08.884744 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:50:08.905513 systemd-networkd[866]: enP65506s1: Gained carrier
Mar 4 00:50:08.920354 systemd-networkd[866]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:50:09.857546 ignition[877]: Ignition 2.19.0
Mar 4 00:50:09.857557 ignition[877]: Stage: fetch-offline
Mar 4 00:50:09.857597 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:09.861512 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:50:09.857604 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:09.880448 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 4 00:50:09.857705 ignition[877]: parsed url from cmdline: ""
Mar 4 00:50:09.857708 ignition[877]: no config URL provided
Mar 4 00:50:09.857713 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:50:09.857720 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:50:09.857724 ignition[877]: failed to fetch config: resource requires networking
Mar 4 00:50:09.860509 ignition[877]: Ignition finished successfully
Mar 4 00:50:09.900201 ignition[885]: Ignition 2.19.0
Mar 4 00:50:09.900208 ignition[885]: Stage: fetch
Mar 4 00:50:09.900395 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:09.900405 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:09.900498 ignition[885]: parsed url from cmdline: ""
Mar 4 00:50:09.900501 ignition[885]: no config URL provided
Mar 4 00:50:09.900506 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:50:09.900512 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:50:09.900533 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 4 00:50:09.993408 systemd-networkd[866]: eth0: Gained IPv6LL
Mar 4 00:50:10.035171 ignition[885]: GET result: OK
Mar 4 00:50:10.035264 ignition[885]: config has been read from IMDS userdata
Mar 4 00:50:10.035339 ignition[885]: parsing config with SHA512: 99be2b9930bbec9c9f380d896fc110ff8b30666d195ca907bc22e27d4473d706a142c6601bfd574f686590a23b8c2e4f77dd58bba88193cb70b387bd083d0003
Mar 4 00:50:10.039503 unknown[885]: fetched base config from "system"
Mar 4 00:50:10.039511 unknown[885]: fetched base config from "system"
Mar 4 00:50:10.041711 ignition[885]: fetch: fetch complete
Mar 4 00:50:10.039517 unknown[885]: fetched user config from "azure"
Mar 4 00:50:10.041716 ignition[885]: fetch: fetch passed
Mar 4 00:50:10.043492 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 00:50:10.041778 ignition[885]: Ignition finished successfully
Mar 4 00:50:10.061579 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 00:50:10.081143 ignition[891]: Ignition 2.19.0
Mar 4 00:50:10.081152 ignition[891]: Stage: kargs
Mar 4 00:50:10.086618 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 00:50:10.081340 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:10.081349 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:10.082269 ignition[891]: kargs: kargs passed
Mar 4 00:50:10.082333 ignition[891]: Ignition finished successfully
Mar 4 00:50:10.107586 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 00:50:10.125897 ignition[897]: Ignition 2.19.0
Mar 4 00:50:10.125906 ignition[897]: Stage: disks
Mar 4 00:50:10.130110 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 00:50:10.126077 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:10.136642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 00:50:10.126086 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:10.145535 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 00:50:10.126993 ignition[897]: disks: disks passed
Mar 4 00:50:10.154724 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:50:10.127036 ignition[897]: Ignition finished successfully
Mar 4 00:50:10.163897 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:50:10.173199 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:50:10.190560 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 00:50:10.278370 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 4 00:50:10.286564 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 00:50:10.301537 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 00:50:10.362322 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none.
Mar 4 00:50:10.363248 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 00:50:10.367046 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:50:10.414409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:50:10.444791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (916)
Mar 4 00:50:10.444853 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:50:10.444866 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:50:10.448465 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:50:10.449508 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 00:50:10.456496 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 4 00:50:10.474334 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:50:10.475853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 00:50:10.475898 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:50:10.482647 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:50:10.495396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 00:50:10.516590 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 00:50:11.168648 coreos-metadata[931]: Mar 04 00:50:11.168 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 00:50:11.177076 coreos-metadata[931]: Mar 04 00:50:11.177 INFO Fetch successful
Mar 4 00:50:11.181538 coreos-metadata[931]: Mar 04 00:50:11.181 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 4 00:50:11.200025 coreos-metadata[931]: Mar 04 00:50:11.200 INFO Fetch successful
Mar 4 00:50:11.221666 coreos-metadata[931]: Mar 04 00:50:11.221 INFO wrote hostname ci-4081.3.6-n-4860195aa5 to /sysroot/etc/hostname
Mar 4 00:50:11.230088 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:50:11.402882 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 00:50:11.447808 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Mar 4 00:50:11.456359 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 00:50:11.464168 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 00:50:12.638245 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 00:50:12.650510 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 00:50:12.656724 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 00:50:12.677843 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 00:50:12.681680 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:50:12.706204 ignition[1035]: INFO : Ignition 2.19.0
Mar 4 00:50:12.706204 ignition[1035]: INFO : Stage: mount
Mar 4 00:50:12.721714 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:12.721714 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:12.721714 ignition[1035]: INFO : mount: mount passed
Mar 4 00:50:12.721714 ignition[1035]: INFO : Ignition finished successfully
Mar 4 00:50:12.709196 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 00:50:12.714793 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 00:50:12.737445 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 00:50:12.751550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:50:12.779322 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1046)
Mar 4 00:50:12.790074 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:50:12.790117 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:50:12.793379 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:50:12.801335 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:50:12.801690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:50:12.826984 ignition[1063]: INFO : Ignition 2.19.0
Mar 4 00:50:12.826984 ignition[1063]: INFO : Stage: files
Mar 4 00:50:12.833120 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:12.833120 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:12.833120 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 00:50:12.847880 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 00:50:12.847880 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 00:50:12.982960 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 00:50:12.988761 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 00:50:12.988761 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 00:50:12.983626 unknown[1063]: wrote ssh authorized keys file for user: core
Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:50:13.003747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 4 00:50:13.057921 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 00:50:13.261211 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:50:13.261211 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:50:13.278022 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 4 00:50:13.694717 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 00:50:14.152660 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:50:14.152660 ignition[1063]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 4 00:50:14.262530 ignition[1063]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:50:14.273347 ignition[1063]: INFO : files: files passed
Mar 4 00:50:14.273347 ignition[1063]: INFO : Ignition finished successfully
Mar 4 00:50:14.274577 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 00:50:14.300035 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 00:50:14.312525 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 00:50:14.390708 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:50:14.329789 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 00:50:14.405400 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:50:14.405400 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:50:14.329899 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 00:50:14.379670 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:50:14.385934 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 00:50:14.406579 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 00:50:14.450782 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 00:50:14.450941 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 00:50:14.460052 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 00:50:14.468972 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 00:50:14.477047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 00:50:14.489566 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 00:50:14.501927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:50:14.516425 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 00:50:14.532138 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:50:14.537819 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:50:14.547378 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 00:50:14.555766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 00:50:14.555942 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:50:14.568279 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 00:50:14.577593 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 00:50:14.585827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 00:50:14.594282 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:50:14.603529 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 00:50:14.613063 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 00:50:14.621719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:50:14.631133 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 00:50:14.640560 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 00:50:14.648804 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 00:50:14.656620 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 00:50:14.656795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:50:14.667985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:50:14.676803 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:50:14.686059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 00:50:14.686165 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:50:14.696017 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 00:50:14.696181 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:50:14.709861 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:50:14.710033 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:50:14.719274 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:50:14.719437 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:50:14.727616 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 4 00:50:14.727766 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:50:14.757432 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:50:14.764890 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 00:50:14.765122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:50:14.786410 ignition[1116]: INFO : Ignition 2.19.0
Mar 4 00:50:14.786410 ignition[1116]: INFO : Stage: umount
Mar 4 00:50:14.786410 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:50:14.786410 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:50:14.786410 ignition[1116]: INFO : umount: umount passed
Mar 4 00:50:14.786410 ignition[1116]: INFO : Ignition finished successfully
Mar 4 00:50:14.793457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:50:14.800284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:50:14.800550 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:50:14.805812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:50:14.805961 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:50:14.820752 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:50:14.821489 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:50:14.821593 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:50:14.833532 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 00:50:14.833638 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 00:50:14.840793 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:50:14.840847 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:50:14.847726 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:50:14.847770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:50:14.856147 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 00:50:14.856198 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 00:50:14.864006 systemd[1]: Stopped target network.target - Network.
Mar 4 00:50:14.871652 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:50:14.871703 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:50:14.882452 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:50:14.890067 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:50:14.893326 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:50:14.899480 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:50:14.906985 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:50:14.914699 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:50:14.914746 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:50:14.924393 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:50:14.924444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:50:14.934517 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:50:14.934565 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:50:14.943839 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:50:14.943880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:50:14.952012 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:50:14.959850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:50:14.972340 systemd-networkd[866]: eth0: DHCPv6 lease lost
Mar 4 00:50:14.973702 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:50:14.973939 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:50:14.983754 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:50:14.984161 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:50:14.992287 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:50:15.156503 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: Data path switched from VF: enP65506s1
Mar 4 00:50:14.992353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:50:15.017516 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:50:15.027057 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:50:15.027124 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:50:15.036001 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:50:15.036050 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:50:15.044081 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:50:15.044129 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:50:15.053028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:50:15.053074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:50:15.061914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:50:15.094664 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:50:15.094848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:50:15.104985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:50:15.105051 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:50:15.113634 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:50:15.113673 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:50:15.121002 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:50:15.121047 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:50:15.133545 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:50:15.133596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:50:15.152806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:50:15.152886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:50:15.185485 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:50:15.195375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:50:15.195444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:50:15.206449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:50:15.206494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:15.216364 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:50:15.216475 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:50:15.224768 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:50:15.224859 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:50:15.712575 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:50:15.712693 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:50:15.716915 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:50:15.724607 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:50:15.724671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:50:15.745603 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:50:15.921411 systemd[1]: Switching root.
Mar 4 00:50:15.949629 systemd-journald[217]: Journal stopped
Mar 4 00:50:21.068943 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 4 00:50:21.068968 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 00:50:21.068978 kernel: SELinux: policy capability open_perms=1
Mar 4 00:50:21.068989 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 00:50:21.068997 kernel: SELinux: policy capability always_check_network=0
Mar 4 00:50:21.069004 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 00:50:21.069013 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 00:50:21.069021 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 00:50:21.069029 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 00:50:21.069038 systemd[1]: Successfully loaded SELinux policy in 161.492ms.
Mar 4 00:50:21.069049 kernel: audit: type=1403 audit(1772585417.774:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 00:50:21.069058 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.029ms.
Mar 4 00:50:21.069068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:50:21.069078 systemd[1]: Detected virtualization microsoft.
Mar 4 00:50:21.069087 systemd[1]: Detected architecture arm64.
Mar 4 00:50:21.069098 systemd[1]: Detected first boot.
Mar 4 00:50:21.069107 systemd[1]: Hostname set to .
Mar 4 00:50:21.069116 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:50:21.069127 zram_generator::config[1176]: No configuration found.
Mar 4 00:50:21.069137 systemd[1]: Populated /etc with preset unit settings.
Mar 4 00:50:21.069146 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 00:50:21.069157 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 4 00:50:21.069166 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 00:50:21.069176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 00:50:21.069185 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 00:50:21.069194 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 00:50:21.069204 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 00:50:21.069214 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 00:50:21.069224 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 00:50:21.069234 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 00:50:21.069243 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:50:21.069252 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:50:21.069262 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 00:50:21.069271 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 00:50:21.069280 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 00:50:21.069290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:50:21.069299 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 4 00:50:21.069318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:50:21.069329 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 00:50:21.069340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:50:21.069351 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:50:21.069361 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:50:21.069370 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:50:21.069380 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 00:50:21.069391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 00:50:21.069401 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:50:21.069410 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:50:21.069419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:50:21.069429 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:50:21.069438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:50:21.069448 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 00:50:21.069460 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 00:50:21.069469 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 00:50:21.069479 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 00:50:21.069488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 00:50:21.069498 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 00:50:21.069507 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 00:50:21.069518 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 00:50:21.069528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:50:21.069538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:50:21.069549 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 00:50:21.069558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:50:21.069568 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:50:21.069577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:50:21.069586 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 00:50:21.069596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:50:21.069607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 00:50:21.069617 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 4 00:50:21.069627 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 4 00:50:21.069637 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:50:21.069646 kernel: fuse: init (API version 7.39)
Mar 4 00:50:21.069654 kernel: loop: module loaded
Mar 4 00:50:21.069679 systemd-journald[1286]: Collecting audit messages is disabled.
Mar 4 00:50:21.069701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:50:21.069711 systemd-journald[1286]: Journal started
Mar 4 00:50:21.069731 systemd-journald[1286]: Runtime Journal (/run/log/journal/a65ed367569a40f89eeb6b30d3fd6287) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:50:21.087996 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 00:50:21.101087 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 00:50:21.101149 kernel: ACPI: bus type drm_connector registered
Mar 4 00:50:21.123449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:50:21.133935 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:50:21.135174 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 00:50:21.141741 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 00:50:21.146736 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 00:50:21.151170 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 00:50:21.155904 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 00:50:21.160672 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 00:50:21.164885 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 00:50:21.170638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:50:21.176279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 00:50:21.176450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 00:50:21.181879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:50:21.182037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:50:21.187096 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:50:21.187243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:50:21.192048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:50:21.192200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:50:21.197817 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 00:50:21.197965 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 00:50:21.202993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:50:21.203180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:50:21.208273 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 00:50:21.214376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:50:21.219702 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 00:50:21.235549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:50:21.242974 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 00:50:21.252388 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 00:50:21.258452 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 00:50:21.263780 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 00:50:21.269783 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 00:50:21.277458 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 00:50:21.282423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:50:21.283611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 00:50:21.288182 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:50:21.289378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:50:21.295473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:50:21.309499 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 00:50:21.317112 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 00:50:21.322879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 00:50:21.329795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 00:50:21.342938 systemd-journald[1286]: Time spent on flushing to /var/log/journal/a65ed367569a40f89eeb6b30d3fd6287 is 15.817ms for 879 entries.
Mar 4 00:50:21.342938 systemd-journald[1286]: System Journal (/var/log/journal/a65ed367569a40f89eeb6b30d3fd6287) is 8.0M, max 2.6G, 2.6G free.
Mar 4 00:50:21.388728 systemd-journald[1286]: Received client request to flush runtime journal.
Mar 4 00:50:21.347617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 00:50:21.353672 udevadm[1336]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 4 00:50:21.391672 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 00:50:21.424806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:50:21.455314 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Mar 4 00:50:21.455328 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Mar 4 00:50:21.461701 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:50:21.474612 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 00:50:21.631618 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 00:50:21.644503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:50:21.661269 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Mar 4 00:50:21.661651 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Mar 4 00:50:21.667709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:50:22.032220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 00:50:22.041523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:50:22.067140 systemd-udevd[1360]: Using default interface naming scheme 'v255'.
Mar 4 00:50:22.254197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:50:22.268964 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:50:22.326601 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 00:50:22.337200 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Mar 4 00:50:22.397151 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 00:50:22.428462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#236 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:50:22.444320 kernel: hv_vmbus: registering driver hv_balloon
Mar 4 00:50:22.444417 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 00:50:22.460251 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 4 00:50:22.460348 kernel: hv_vmbus: registering driver hyperv_fb
Mar 4 00:50:22.460370 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 4 00:50:22.481325 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 4 00:50:22.481422 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 4 00:50:22.481441 kernel: Console: switching to colour dummy device 80x25
Mar 4 00:50:22.485333 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:50:22.538548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:50:22.554915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:50:22.555202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:22.559172 systemd-networkd[1371]: lo: Link UP
Mar 4 00:50:22.559176 systemd-networkd[1371]: lo: Gained carrier
Mar 4 00:50:22.561005 systemd-networkd[1371]: Enumeration completed
Mar 4 00:50:22.561377 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:50:22.561389 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:50:22.562183 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:50:22.578845 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 00:50:22.589457 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1373)
Mar 4 00:50:22.597904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:50:22.640325 kernel: mlx5_core ffe2:00:02.0 enP65506s1: Link up
Mar 4 00:50:22.666720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:50:22.668324 kernel: hv_netvsc 002248c0-74aa-0022-48c0-74aa002248c0 eth0: Data path switched to VF: enP65506s1
Mar 4 00:50:22.672746 systemd-networkd[1371]: enP65506s1: Link UP
Mar 4 00:50:22.673388 systemd-networkd[1371]: eth0: Link UP
Mar 4 00:50:22.673396 systemd-networkd[1371]: eth0: Gained carrier
Mar 4 00:50:22.673466 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:50:22.678595 systemd-networkd[1371]: enP65506s1: Gained carrier
Mar 4 00:50:22.689362 systemd-networkd[1371]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:50:22.744769 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 00:50:22.755416 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 00:50:22.857596 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 00:50:22.885713 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 00:50:22.891813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:50:22.901446 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 00:50:22.905926 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 00:50:22.935639 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 00:50:22.941529 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 00:50:22.947032 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 00:50:22.947153 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:50:22.951557 systemd[1]: Reached target machines.target - Containers.
Mar 4 00:50:22.957458 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 00:50:22.968452 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 00:50:22.974603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 00:50:22.978920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:50:22.979865 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 00:50:22.986505 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 00:50:22.996156 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 00:50:23.001933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 00:50:23.054328 kernel: loop0: detected capacity change from 0 to 31320
Mar 4 00:50:23.087674 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 00:50:23.102002 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 00:50:23.102939 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 00:50:23.163655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:50:23.488329 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 00:50:23.560537 kernel: loop1: detected capacity change from 0 to 114432
Mar 4 00:50:23.915323 kernel: loop2: detected capacity change from 0 to 209336
Mar 4 00:50:23.979323 kernel: loop3: detected capacity change from 0 to 114328
Mar 4 00:50:24.324330 kernel: loop4: detected capacity change from 0 to 31320
Mar 4 00:50:24.340425 kernel: loop5: detected capacity change from 0 to 114432
Mar 4 00:50:24.355323 kernel: loop6: detected capacity change from 0 to 209336
Mar 4 00:50:24.371486 kernel: loop7: detected capacity change from 0 to 114328
Mar 4 00:50:24.379726 (sd-merge)[1479]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 4 00:50:24.380159 (sd-merge)[1479]: Merged extensions into '/usr'.
Mar 4 00:50:24.392208 systemd[1]: Reloading requested from client PID 1462 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 00:50:24.392222 systemd[1]: Reloading...
Mar 4 00:50:24.394389 systemd-networkd[1371]: eth0: Gained IPv6LL
Mar 4 00:50:24.447329 zram_generator::config[1506]: No configuration found.
Mar 4 00:50:24.582697 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:50:24.654589 systemd[1]: Reloading finished in 262 ms.
Mar 4 00:50:24.671209 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 00:50:24.677790 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 00:50:24.693456 systemd[1]: Starting ensure-sysext.service...
Mar 4 00:50:24.701512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:50:24.711038 systemd[1]: Reloading requested from client PID 1570 ('systemctl') (unit ensure-sysext.service)...
Mar 4 00:50:24.711053 systemd[1]: Reloading...
Mar 4 00:50:24.732045 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 00:50:24.732327 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 00:50:24.732973 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 00:50:24.733183 systemd-tmpfiles[1571]: ACLs are not supported, ignoring.
Mar 4 00:50:24.733227 systemd-tmpfiles[1571]: ACLs are not supported, ignoring.
Mar 4 00:50:24.736894 systemd-tmpfiles[1571]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:50:24.736906 systemd-tmpfiles[1571]: Skipping /boot
Mar 4 00:50:24.746558 systemd-tmpfiles[1571]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:50:24.746569 systemd-tmpfiles[1571]: Skipping /boot
Mar 4 00:50:24.801386 zram_generator::config[1601]: No configuration found.
Mar 4 00:50:24.910414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:50:24.983752 systemd[1]: Reloading finished in 271 ms.
Mar 4 00:50:24.998628 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:50:25.016553 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 00:50:25.028546 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 00:50:25.036479 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 00:50:25.045001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:50:25.057585 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 00:50:25.066812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:50:25.070920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:50:25.078593 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:50:25.086556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:50:25.096374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:50:25.102732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:50:25.102895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:50:25.111508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:50:25.111980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:50:25.118196 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:50:25.118572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:50:25.136187 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 00:50:25.145631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:50:25.151542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:50:25.158399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:50:25.166761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:50:25.177479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:50:25.180715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:50:25.180893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:50:25.183404 systemd-resolved[1669]: Positive Trust Anchors:
Mar 4 00:50:25.183415 systemd-resolved[1669]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:50:25.183447 systemd-resolved[1669]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:50:25.188498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:50:25.188668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:50:25.194790 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:50:25.196597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:50:25.206073 augenrules[1702]: No rules
Mar 4 00:50:25.205998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:50:25.211644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:50:25.219579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:50:25.220587 systemd-resolved[1669]: Using system hostname 'ci-4081.3.6-n-4860195aa5'.
Mar 4 00:50:25.234571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:50:25.243743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:50:25.252685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:50:25.253027 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 00:50:25.261752 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:50:25.268556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 00:50:25.274170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:50:25.274344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:50:25.280192 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:50:25.280894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:50:25.286927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:50:25.287161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:50:25.294054 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:50:25.294474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:50:25.303959 systemd[1]: Finished ensure-sysext.service.
Mar 4 00:50:25.310985 systemd[1]: Reached target network.target - Network.
Mar 4 00:50:25.314858 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 00:50:25.319787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:50:25.325429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:50:25.325502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:50:25.325933 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 00:50:25.873864 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 00:50:25.880434 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 00:50:29.171153 ldconfig[1458]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 00:50:29.182274 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 00:50:29.193457 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 00:50:29.206959 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 00:50:29.212756 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:50:29.217964 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 00:50:29.223031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 00:50:29.228574 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 00:50:29.233132 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 00:50:29.238252 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 00:50:29.243389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 00:50:29.243425 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:50:29.247267 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:50:29.252112 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 00:50:29.258253 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 00:50:29.263691 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 00:50:29.268730 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 00:50:29.273363 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:50:29.277331 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:50:29.281497 systemd[1]: System is tainted: cgroupsv1
Mar 4 00:50:29.281540 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 00:50:29.281558 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 00:50:29.292382 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 4 00:50:29.299429 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 00:50:29.309568 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 4 00:50:29.327490 (chronyd)[1741]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 4 00:50:29.329549 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 00:50:29.336592 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 00:50:29.342698 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 00:50:29.347866 jq[1746]: false
Mar 4 00:50:29.351514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 00:50:29.351568 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 4 00:50:29.355524 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 4 00:50:29.362698 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 4 00:50:29.366923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:50:29.367956 KVP[1751]: KVP starting; pid is:1751
Mar 4 00:50:29.372201 chronyd[1755]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 4 00:50:29.376499 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 00:50:29.383500 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 00:50:29.389428 KVP[1751]: KVP LIC Version: 3.1
Mar 4 00:50:29.390328 kernel: hv_utils: KVP IC version 4.0
Mar 4 00:50:29.395449 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 00:50:29.403509 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 00:50:29.414245 extend-filesystems[1749]: Found loop4
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found loop5
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found loop6
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found loop7
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda1
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda2
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda3
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found usr
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda4
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda6
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda7
Mar 4 00:50:29.421632 extend-filesystems[1749]: Found sda9
Mar 4 00:50:29.421632 extend-filesystems[1749]: Checking size of /dev/sda9
Mar 4 00:50:29.415021 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 00:50:29.586676 extend-filesystems[1749]: Old size kept for /dev/sda9
Mar 4 00:50:29.586676 extend-filesystems[1749]: Found sr0
Mar 4 00:50:29.460608 chronyd[1755]: Timezone right/UTC failed leap second check, ignoring
Mar 4 00:50:29.443629 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 00:50:29.460814 chronyd[1755]: Loaded seccomp filter (level 2)
Mar 4 00:50:29.452832 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 00:50:29.567854 dbus-daemon[1745]: [system] SELinux support is enabled
Mar 4 00:50:29.458065 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 00:50:29.492424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 00:50:29.619031 update_engine[1774]: I20260304 00:50:29.567190 1774 main.cc:92] Flatcar Update Engine starting
Mar 4 00:50:29.619031 update_engine[1774]: I20260304 00:50:29.592823 1774 update_check_scheduler.cc:74] Next update check in 6m57s
Mar 4 00:50:29.515653 systemd[1]: Started chronyd.service - NTP client/server.
Mar 4 00:50:29.619338 jq[1784]: true
Mar 4 00:50:29.540108 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 00:50:29.541365 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 00:50:29.541651 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 00:50:29.541869 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 00:50:29.563741 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 00:50:29.563980 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 00:50:29.580815 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 00:50:29.604000 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 00:50:29.604945 systemd-logind[1769]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 4 00:50:29.605133 systemd-logind[1769]: New seat seat0.
Mar 4 00:50:29.614344 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 00:50:29.666426 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1799)
Mar 4 00:50:29.666917 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 00:50:29.667202 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 00:50:29.707299 coreos-metadata[1744]: Mar 04 00:50:29.702 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 00:50:29.717503 coreos-metadata[1744]: Mar 04 00:50:29.714 INFO Fetch successful
Mar 4 00:50:29.717503 coreos-metadata[1744]: Mar 04 00:50:29.714 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 4 00:50:29.713723 (ntainerd)[1831]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 00:50:29.720040 dbus-daemon[1745]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 4 00:50:29.721296 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 00:50:29.728645 coreos-metadata[1744]: Mar 04 00:50:29.728 INFO Fetch successful
Mar 4 00:50:29.728928 coreos-metadata[1744]: Mar 04 00:50:29.728 INFO Fetching http://168.63.129.16/machine/ea31fd98-c7bc-4d38-b649-c6d13916aade/442d8ef1%2D690f%2D4e14%2Dbdf5%2D6af95dccaa8b.%5Fci%2D4081.3.6%2Dn%2D4860195aa5?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 4 00:50:29.733803 jq[1830]: true
Mar 4 00:50:29.736584 coreos-metadata[1744]: Mar 04 00:50:29.734 INFO Fetch successful
Mar 4 00:50:29.736584 coreos-metadata[1744]: Mar 04 00:50:29.734 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 4 00:50:29.748871 coreos-metadata[1744]: Mar 04 00:50:29.748 INFO Fetch successful
Mar 4 00:50:29.774678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 00:50:29.774893 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 00:50:29.783214 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 00:50:29.783347 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 00:50:29.790595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 00:50:29.798611 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 00:50:29.816484 tar[1814]: linux-arm64/LICENSE
Mar 4 00:50:29.820374 tar[1814]: linux-arm64/helm
Mar 4 00:50:29.858937 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 4 00:50:29.868170 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 00:50:29.888426 bash[1870]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 00:50:29.892894 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 4 00:50:29.905883 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 4 00:50:30.126896 locksmithd[1866]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 00:50:30.323648 sshd_keygen[1773]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 4 00:50:30.357213 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 4 00:50:30.378960 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 4 00:50:30.392587 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 4 00:50:30.400525 systemd[1]: issuegen.service: Deactivated successfully.
Mar 4 00:50:30.400763 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 4 00:50:30.417820 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 4 00:50:30.451434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 4 00:50:30.471203 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 4 00:50:30.493667 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 4 00:50:30.511759 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 4 00:50:30.517346 tar[1814]: linux-arm64/README.md
Mar 4 00:50:30.522416 systemd[1]: Reached target getty.target - Login Prompts.
Mar 4 00:50:30.541832 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 00:50:30.569243 containerd[1831]: time="2026-03-04T00:50:30.569149860Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 00:50:30.601006 containerd[1831]: time="2026-03-04T00:50:30.600955540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.602567 containerd[1831]: time="2026-03-04T00:50:30.602532620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.602667500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.602695900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.602852220Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.602871660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.602935900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.602948380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.603161380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.603175980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.603189100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.603198500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603325 containerd[1831]: time="2026-03-04T00:50:30.603266180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.603770 containerd[1831]: time="2026-03-04T00:50:30.603748140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:50:30.604000 containerd[1831]: time="2026-03-04T00:50:30.603981620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:50:30.604062 containerd[1831]: time="2026-03-04T00:50:30.604049940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 00:50:30.604209 containerd[1831]: time="2026-03-04T00:50:30.604192300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 00:50:30.604323 containerd[1831]: time="2026-03-04T00:50:30.604293340Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 00:50:30.618003 containerd[1831]: time="2026-03-04T00:50:30.617966860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 00:50:30.618190 containerd[1831]: time="2026-03-04T00:50:30.618175420Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 00:50:30.618421 containerd[1831]: time="2026-03-04T00:50:30.618381180Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 00:50:30.618487 containerd[1831]: time="2026-03-04T00:50:30.618427740Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 00:50:30.618487 containerd[1831]: time="2026-03-04T00:50:30.618447260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 00:50:30.618654 containerd[1831]: time="2026-03-04T00:50:30.618635500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 00:50:30.620842 containerd[1831]: time="2026-03-04T00:50:30.620810700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 00:50:30.621024 containerd[1831]: time="2026-03-04T00:50:30.621005900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 00:50:30.621288 containerd[1831]: time="2026-03-04T00:50:30.621028460Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 00:50:30.621364 containerd[1831]: time="2026-03-04T00:50:30.621294140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 00:50:30.621364 containerd[1831]: time="2026-03-04T00:50:30.621329420Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621364 containerd[1831]: time="2026-03-04T00:50:30.621349220Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621424 containerd[1831]: time="2026-03-04T00:50:30.621375660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621424 containerd[1831]: time="2026-03-04T00:50:30.621390420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621424 containerd[1831]: time="2026-03-04T00:50:30.621406380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621424 containerd[1831]: time="2026-03-04T00:50:30.621420820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621496 containerd[1831]: time="2026-03-04T00:50:30.621434060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621496 containerd[1831]: time="2026-03-04T00:50:30.621446980Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 00:50:30.621496 containerd[1831]: time="2026-03-04T00:50:30.621468420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621496 containerd[1831]: time="2026-03-04T00:50:30.621482700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621496 containerd[1831]: time="2026-03-04T00:50:30.621495700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621589 containerd[1831]: time="2026-03-04T00:50:30.621510420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621589 containerd[1831]: time="2026-03-04T00:50:30.621523020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621589 containerd[1831]: time="2026-03-04T00:50:30.621535820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621675 containerd[1831]: time="2026-03-04T00:50:30.621547620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621707 containerd[1831]: time="2026-03-04T00:50:30.621687100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621734 containerd[1831]: time="2026-03-04T00:50:30.621706020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621734 containerd[1831]: time="2026-03-04T00:50:30.621724340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621769 containerd[1831]: time="2026-03-04T00:50:30.621744740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621769 containerd[1831]: time="2026-03-04T00:50:30.621757260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621811 containerd[1831]: time="2026-03-04T00:50:30.621769180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621811 containerd[1831]: time="2026-03-04T00:50:30.621785860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 00:50:30.621848 containerd[1831]: time="2026-03-04T00:50:30.621812700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621848 containerd[1831]: time="2026-03-04T00:50:30.621826740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.621848 containerd[1831]: time="2026-03-04T00:50:30.621838020Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 00:50:30.622025 containerd[1831]: time="2026-03-04T00:50:30.621894660Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 00:50:30.622296 containerd[1831]: time="2026-03-04T00:50:30.622272100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 00:50:30.622296 containerd[1831]: time="2026-03-04T00:50:30.622294180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 00:50:30.622365 containerd[1831]: time="2026-03-04T00:50:30.622327700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 00:50:30.622365 containerd[1831]: time="2026-03-04T00:50:30.622338980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.622365 containerd[1831]: time="2026-03-04T00:50:30.622356260Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 00:50:30.622420 containerd[1831]: time="2026-03-04T00:50:30.622367180Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 00:50:30.622420 containerd[1831]: time="2026-03-04T00:50:30.622382540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 00:50:30.622841 containerd[1831]: time="2026-03-04T00:50:30.622775140Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 00:50:30.622957 containerd[1831]: time="2026-03-04T00:50:30.622928540Z" level=info msg="Connect containerd service"
Mar 4 00:50:30.622988 containerd[1831]: time="2026-03-04T00:50:30.622976740Z" level=info msg="using legacy CRI server"
Mar 4 00:50:30.622988 containerd[1831]: time="2026-03-04T00:50:30.622984860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 00:50:30.623100 containerd[1831]: time="2026-03-04T00:50:30.623078660Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 00:50:30.625243 containerd[1831]: time="2026-03-04T00:50:30.625122580Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to
load cni config" Mar 4 00:50:30.626593 containerd[1831]: time="2026-03-04T00:50:30.626517940Z" level=info msg="Start subscribing containerd event" Mar 4 00:50:30.627091 containerd[1831]: time="2026-03-04T00:50:30.626583620Z" level=info msg="Start recovering state" Mar 4 00:50:30.627091 containerd[1831]: time="2026-03-04T00:50:30.626920940Z" level=info msg="Start event monitor" Mar 4 00:50:30.627091 containerd[1831]: time="2026-03-04T00:50:30.626936580Z" level=info msg="Start snapshots syncer" Mar 4 00:50:30.627091 containerd[1831]: time="2026-03-04T00:50:30.626946300Z" level=info msg="Start cni network conf syncer for default" Mar 4 00:50:30.627091 containerd[1831]: time="2026-03-04T00:50:30.626953860Z" level=info msg="Start streaming server" Mar 4 00:50:30.627481 containerd[1831]: time="2026-03-04T00:50:30.627452260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 00:50:30.627529 containerd[1831]: time="2026-03-04T00:50:30.627514380Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 00:50:30.632362 containerd[1831]: time="2026-03-04T00:50:30.627570220Z" level=info msg="containerd successfully booted in 0.059407s" Mar 4 00:50:30.627698 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 00:50:30.697500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:50:30.703090 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 00:50:30.703525 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:50:30.708245 systemd[1]: Startup finished in 14.116s (kernel) + 13.093s (userspace) = 27.209s. 
Mar 4 00:50:30.986643 login[1917]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 4 00:50:30.991046 login[1916]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:50:31.000056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 00:50:31.004711 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 00:50:31.008363 systemd-logind[1769]: New session 2 of user core. Mar 4 00:50:31.038301 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 00:50:31.049011 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 4 00:50:31.071977 (systemd)[1949]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 00:50:31.180523 kubelet[1936]: E0304 00:50:31.180474 1936 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:50:31.185508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:50:31.185686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:50:31.245848 systemd[1949]: Queued start job for default target default.target. Mar 4 00:50:31.246633 systemd[1949]: Created slice app.slice - User Application Slice. Mar 4 00:50:31.246657 systemd[1949]: Reached target paths.target - Paths. Mar 4 00:50:31.246669 systemd[1949]: Reached target timers.target - Timers. Mar 4 00:50:31.252389 systemd[1949]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 00:50:31.261071 systemd[1949]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 00:50:31.261140 systemd[1949]: Reached target sockets.target - Sockets. 
Mar 4 00:50:31.261153 systemd[1949]: Reached target basic.target - Basic System. Mar 4 00:50:31.261206 systemd[1949]: Reached target default.target - Main User Target. Mar 4 00:50:31.261233 systemd[1949]: Startup finished in 180ms. Mar 4 00:50:31.261349 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 00:50:31.267620 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 00:50:31.988611 login[1917]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:50:31.992431 systemd-logind[1769]: New session 1 of user core. Mar 4 00:50:32.003637 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 00:50:32.536651 waagent[1914]: 2026-03-04T00:50:32.532411Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 4 00:50:32.537239 waagent[1914]: 2026-03-04T00:50:32.537171Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 4 00:50:32.540997 waagent[1914]: 2026-03-04T00:50:32.540939Z INFO Daemon Daemon Python: 3.11.9 Mar 4 00:50:32.544430 waagent[1914]: 2026-03-04T00:50:32.544373Z INFO Daemon Daemon Run daemon Mar 4 00:50:32.547753 waagent[1914]: 2026-03-04T00:50:32.547707Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 4 00:50:32.554702 waagent[1914]: 2026-03-04T00:50:32.554635Z INFO Daemon Daemon Using waagent for provisioning Mar 4 00:50:32.558863 waagent[1914]: 2026-03-04T00:50:32.558814Z INFO Daemon Daemon Activate resource disk Mar 4 00:50:32.562389 waagent[1914]: 2026-03-04T00:50:32.562340Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 4 00:50:32.571502 waagent[1914]: 2026-03-04T00:50:32.571435Z INFO Daemon Daemon Found device: None Mar 4 00:50:32.575320 waagent[1914]: 2026-03-04T00:50:32.575244Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 4 00:50:32.582151 waagent[1914]: 2026-03-04T00:50:32.582091Z ERROR Daemon 
Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 4 00:50:32.592893 waagent[1914]: 2026-03-04T00:50:32.592828Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 4 00:50:32.597404 waagent[1914]: 2026-03-04T00:50:32.597347Z INFO Daemon Daemon Running default provisioning handler Mar 4 00:50:32.608031 waagent[1914]: 2026-03-04T00:50:32.607962Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 4 00:50:32.618857 waagent[1914]: 2026-03-04T00:50:32.618792Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 4 00:50:32.626881 waagent[1914]: 2026-03-04T00:50:32.626814Z INFO Daemon Daemon cloud-init is enabled: False Mar 4 00:50:32.631091 waagent[1914]: 2026-03-04T00:50:32.631023Z INFO Daemon Daemon Copying ovf-env.xml Mar 4 00:50:32.780237 waagent[1914]: 2026-03-04T00:50:32.780142Z INFO Daemon Daemon Successfully mounted dvd Mar 4 00:50:32.794157 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 4 00:50:32.796098 waagent[1914]: 2026-03-04T00:50:32.795589Z INFO Daemon Daemon Detect protocol endpoint Mar 4 00:50:32.799509 waagent[1914]: 2026-03-04T00:50:32.799457Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 4 00:50:32.803860 waagent[1914]: 2026-03-04T00:50:32.803815Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 4 00:50:32.809632 waagent[1914]: 2026-03-04T00:50:32.809589Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 4 00:50:32.813868 waagent[1914]: 2026-03-04T00:50:32.813828Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 4 00:50:32.817871 waagent[1914]: 2026-03-04T00:50:32.817832Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 4 00:50:32.870916 waagent[1914]: 2026-03-04T00:50:32.870873Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 4 00:50:32.876171 waagent[1914]: 2026-03-04T00:50:32.876147Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 4 00:50:32.880557 waagent[1914]: 2026-03-04T00:50:32.880514Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 4 00:50:33.066838 waagent[1914]: 2026-03-04T00:50:33.066694Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 4 00:50:33.072066 waagent[1914]: 2026-03-04T00:50:33.072013Z INFO Daemon Daemon Forcing an update of the goal state. Mar 4 00:50:33.080205 waagent[1914]: 2026-03-04T00:50:33.080158Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 4 00:50:33.099708 waagent[1914]: 2026-03-04T00:50:33.099665Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 4 00:50:33.104157 waagent[1914]: 2026-03-04T00:50:33.104115Z INFO Daemon Mar 4 00:50:33.106533 waagent[1914]: 2026-03-04T00:50:33.106491Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5c47fc41-1bc8-4f53-9586-75efeb1e5c28 eTag: 8273454217892352163 source: Fabric] Mar 4 00:50:33.114935 waagent[1914]: 2026-03-04T00:50:33.114893Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 4 00:50:33.120114 waagent[1914]: 2026-03-04T00:50:33.120073Z INFO Daemon Mar 4 00:50:33.122446 waagent[1914]: 2026-03-04T00:50:33.122410Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 4 00:50:33.131872 waagent[1914]: 2026-03-04T00:50:33.131841Z INFO Daemon Daemon Downloading artifacts profile blob Mar 4 00:50:33.207327 waagent[1914]: 2026-03-04T00:50:33.207232Z INFO Daemon Downloaded certificate {'thumbprint': 'ADE674303AF92E8A765D6A3CD9C8EE4A558C3600', 'hasPrivateKey': True} Mar 4 00:50:33.215251 waagent[1914]: 2026-03-04T00:50:33.215203Z INFO Daemon Fetch goal state completed Mar 4 00:50:33.225634 waagent[1914]: 2026-03-04T00:50:33.225573Z INFO Daemon Daemon Starting provisioning Mar 4 00:50:33.229364 waagent[1914]: 2026-03-04T00:50:33.229315Z INFO Daemon Daemon Handle ovf-env.xml. Mar 4 00:50:33.233150 waagent[1914]: 2026-03-04T00:50:33.233113Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-4860195aa5] Mar 4 00:50:33.256616 waagent[1914]: 2026-03-04T00:50:33.256543Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-4860195aa5] Mar 4 00:50:33.261356 waagent[1914]: 2026-03-04T00:50:33.261298Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 4 00:50:33.266281 waagent[1914]: 2026-03-04T00:50:33.266234Z INFO Daemon Daemon Primary interface is [eth0] Mar 4 00:50:33.295452 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:50:33.295458 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 4 00:50:33.295501 systemd-networkd[1371]: eth0: DHCP lease lost Mar 4 00:50:33.297024 waagent[1914]: 2026-03-04T00:50:33.296957Z INFO Daemon Daemon Create user account if not exists Mar 4 00:50:33.301452 waagent[1914]: 2026-03-04T00:50:33.301400Z INFO Daemon Daemon User core already exists, skip useradd Mar 4 00:50:33.305956 waagent[1914]: 2026-03-04T00:50:33.305908Z INFO Daemon Daemon Configure sudoer Mar 4 00:50:33.306386 systemd-networkd[1371]: eth0: DHCPv6 lease lost Mar 4 00:50:33.309913 waagent[1914]: 2026-03-04T00:50:33.309827Z INFO Daemon Daemon Configure sshd Mar 4 00:50:33.313553 waagent[1914]: 2026-03-04T00:50:33.313504Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 4 00:50:33.323141 waagent[1914]: 2026-03-04T00:50:33.323049Z INFO Daemon Daemon Deploy ssh public key. Mar 4 00:50:33.333377 systemd-networkd[1371]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 4 00:50:34.401201 waagent[1914]: 2026-03-04T00:50:34.401150Z INFO Daemon Daemon Provisioning complete Mar 4 00:50:34.417385 waagent[1914]: 2026-03-04T00:50:34.417334Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 4 00:50:34.422565 waagent[1914]: 2026-03-04T00:50:34.422512Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 4 00:50:34.430022 waagent[1914]: 2026-03-04T00:50:34.429978Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 4 00:50:34.560006 waagent[2004]: 2026-03-04T00:50:34.559930Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 4 00:50:34.561030 waagent[2004]: 2026-03-04T00:50:34.560428Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 4 00:50:34.561030 waagent[2004]: 2026-03-04T00:50:34.560501Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 4 00:50:34.601338 waagent[2004]: 2026-03-04T00:50:34.601178Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 4 00:50:34.601461 waagent[2004]: 2026-03-04T00:50:34.601434Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 4 00:50:34.601532 waagent[2004]: 2026-03-04T00:50:34.601499Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 4 00:50:34.610161 waagent[2004]: 2026-03-04T00:50:34.610092Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 4 00:50:34.616481 waagent[2004]: 2026-03-04T00:50:34.616439Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 4 00:50:34.616997 waagent[2004]: 2026-03-04T00:50:34.616957Z INFO ExtHandler Mar 4 00:50:34.617066 waagent[2004]: 2026-03-04T00:50:34.617039Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1be4751b-1be0-48f2-957d-bd44d900d73b eTag: 8273454217892352163 source: Fabric] Mar 4 00:50:34.617366 waagent[2004]: 2026-03-04T00:50:34.617328Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 4 00:50:34.617951 waagent[2004]: 2026-03-04T00:50:34.617909Z INFO ExtHandler Mar 4 00:50:34.618011 waagent[2004]: 2026-03-04T00:50:34.617986Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 4 00:50:34.622011 waagent[2004]: 2026-03-04T00:50:34.621977Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 4 00:50:34.693042 waagent[2004]: 2026-03-04T00:50:34.692908Z INFO ExtHandler Downloaded certificate {'thumbprint': 'ADE674303AF92E8A765D6A3CD9C8EE4A558C3600', 'hasPrivateKey': True} Mar 4 00:50:34.693537 waagent[2004]: 2026-03-04T00:50:34.693494Z INFO ExtHandler Fetch goal state completed Mar 4 00:50:34.708026 waagent[2004]: 2026-03-04T00:50:34.707973Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2004 Mar 4 00:50:34.708176 waagent[2004]: 2026-03-04T00:50:34.708143Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 4 00:50:34.709769 waagent[2004]: 2026-03-04T00:50:34.709727Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 4 00:50:34.710124 waagent[2004]: 2026-03-04T00:50:34.710091Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 4 00:50:34.748981 waagent[2004]: 2026-03-04T00:50:34.748937Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 4 00:50:34.749178 waagent[2004]: 2026-03-04T00:50:34.749141Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 4 00:50:34.755255 waagent[2004]: 2026-03-04T00:50:34.755201Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 4 00:50:34.761775 systemd[1]: Reloading requested from client PID 2017 ('systemctl') (unit waagent.service)... Mar 4 00:50:34.761790 systemd[1]: Reloading... 
Mar 4 00:50:34.847354 zram_generator::config[2051]: No configuration found. Mar 4 00:50:34.958378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:50:35.035498 systemd[1]: Reloading finished in 273 ms. Mar 4 00:50:35.055635 waagent[2004]: 2026-03-04T00:50:35.055538Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 4 00:50:35.061494 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit waagent.service)... Mar 4 00:50:35.061634 systemd[1]: Reloading... Mar 4 00:50:35.131341 zram_generator::config[2147]: No configuration found. Mar 4 00:50:35.245515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:50:35.319042 systemd[1]: Reloading finished in 257 ms. Mar 4 00:50:35.341066 waagent[2004]: 2026-03-04T00:50:35.340975Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 4 00:50:35.341170 waagent[2004]: 2026-03-04T00:50:35.341137Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 4 00:50:35.755115 waagent[2004]: 2026-03-04T00:50:35.755037Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 4 00:50:35.755705 waagent[2004]: 2026-03-04T00:50:35.755658Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 4 00:50:35.756487 waagent[2004]: 2026-03-04T00:50:35.756410Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 4 00:50:35.756871 waagent[2004]: 2026-03-04T00:50:35.756780Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 4 00:50:35.757268 waagent[2004]: 2026-03-04T00:50:35.757148Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 4 00:50:35.757375 waagent[2004]: 2026-03-04T00:50:35.757270Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 4 00:50:35.757689 waagent[2004]: 2026-03-04T00:50:35.757635Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 4 00:50:35.757956 waagent[2004]: 2026-03-04T00:50:35.757884Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 4 00:50:35.758133 waagent[2004]: 2026-03-04T00:50:35.758053Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 4 00:50:35.758427 waagent[2004]: 2026-03-04T00:50:35.758354Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 4 00:50:35.758650 waagent[2004]: 2026-03-04T00:50:35.758586Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 4 00:50:35.760007 waagent[2004]: 2026-03-04T00:50:35.759149Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 4 00:50:35.760954 waagent[2004]: 2026-03-04T00:50:35.760112Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 4 00:50:35.760954 waagent[2004]: 2026-03-04T00:50:35.760280Z INFO EnvHandler ExtHandler Configure routes Mar 4 00:50:35.760954 waagent[2004]: 2026-03-04T00:50:35.760380Z INFO EnvHandler ExtHandler Gateway:None Mar 4 00:50:35.760954 waagent[2004]: 2026-03-04T00:50:35.760432Z INFO EnvHandler ExtHandler Routes:None Mar 4 00:50:35.761369 waagent[2004]: 2026-03-04T00:50:35.761297Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 4 00:50:35.761650 waagent[2004]: 2026-03-04T00:50:35.761606Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 4 00:50:35.761650 waagent[2004]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 4 00:50:35.761650 waagent[2004]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 4 00:50:35.761650 waagent[2004]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 4 00:50:35.761650 waagent[2004]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 4 00:50:35.761650 waagent[2004]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 4 00:50:35.761650 waagent[2004]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 4 00:50:35.770219 waagent[2004]: 2026-03-04T00:50:35.770174Z INFO ExtHandler ExtHandler Mar 4 00:50:35.770339 waagent[2004]: 2026-03-04T00:50:35.770277Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 33cf427c-c0a9-4308-989a-355091c2bdb1 correlation 8bafc3eb-2f5f-4370-bd46-56b926315e78 created: 2026-03-04T00:49:33.236977Z] Mar 4 00:50:35.771109 waagent[2004]: 2026-03-04T00:50:35.771055Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 4 00:50:35.772439 waagent[2004]: 2026-03-04T00:50:35.772381Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 4 00:50:35.807045 waagent[2004]: 2026-03-04T00:50:35.806976Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DFC54E3C-EF12-48A4-AE95-9AF7C2F2B47A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 4 00:50:35.845122 waagent[2004]: 2026-03-04T00:50:35.845039Z INFO MonitorHandler ExtHandler Network interfaces: Mar 4 00:50:35.845122 waagent[2004]: Executing ['ip', '-a', '-o', 'link']: Mar 4 00:50:35.845122 waagent[2004]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 4 00:50:35.845122 waagent[2004]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c0:74:aa brd ff:ff:ff:ff:ff:ff Mar 4 00:50:35.845122 waagent[2004]: 3: enP65506s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c0:74:aa brd ff:ff:ff:ff:ff:ff\ altname enP65506p0s2 Mar 4 00:50:35.845122 waagent[2004]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 4 00:50:35.845122 waagent[2004]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 4 00:50:35.845122 waagent[2004]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 4 00:50:35.845122 waagent[2004]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 4 00:50:35.845122 waagent[2004]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 4 00:50:35.845122 waagent[2004]: 2: eth0 inet6 fe80::222:48ff:fec0:74aa/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 4 00:50:35.892275 waagent[2004]: 2026-03-04T00:50:35.892204Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Mar 4 00:50:35.892275 waagent[2004]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:50:35.892275 waagent[2004]: pkts bytes target prot opt in out source destination Mar 4 00:50:35.892275 waagent[2004]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:50:35.892275 waagent[2004]: pkts bytes target prot opt in out source destination Mar 4 00:50:35.892275 waagent[2004]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:50:35.892275 waagent[2004]: pkts bytes target prot opt in out source destination Mar 4 00:50:35.892275 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 4 00:50:35.892275 waagent[2004]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 4 00:50:35.892275 waagent[2004]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 4 00:50:35.896345 waagent[2004]: 2026-03-04T00:50:35.896279Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 4 00:50:35.896345 waagent[2004]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:50:35.896345 waagent[2004]: pkts bytes target prot opt in out source destination Mar 4 00:50:35.896345 waagent[2004]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:50:35.896345 waagent[2004]: pkts bytes target prot opt in out source destination Mar 4 00:50:35.896345 waagent[2004]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:50:35.896345 waagent[2004]: pkts bytes target prot opt in out source destination Mar 4 00:50:35.896345 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 4 00:50:35.896345 waagent[2004]: 12 1214 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 4 00:50:35.896345 waagent[2004]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 4 00:50:35.896624 waagent[2004]: 2026-03-04T00:50:35.896586Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 4 00:50:41.360294 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 00:50:41.367476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:50:41.474174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:50:41.478809 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:50:41.593948 kubelet[2246]: E0304 00:50:41.593879 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:50:41.597570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:50:41.597770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:50:49.304042 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 00:50:49.314601 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:32986.service - OpenSSH per-connection server daemon (10.200.16.10:32986). Mar 4 00:50:49.890089 sshd[2254]: Accepted publickey for core from 10.200.16.10 port 32986 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:50:49.891446 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:50:49.895755 systemd-logind[1769]: New session 3 of user core. Mar 4 00:50:49.910686 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 00:50:50.323543 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:58620.service - OpenSSH per-connection server daemon (10.200.16.10:58620). 
Mar 4 00:50:50.806163 sshd[2259]: Accepted publickey for core from 10.200.16.10 port 58620 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:50:50.806980 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:50:50.810907 systemd-logind[1769]: New session 4 of user core. Mar 4 00:50:50.817677 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 00:50:51.151546 sshd[2259]: pam_unix(sshd:session): session closed for user core Mar 4 00:50:51.154809 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:58620.service: Deactivated successfully. Mar 4 00:50:51.157716 systemd-logind[1769]: Session 4 logged out. Waiting for processes to exit. Mar 4 00:50:51.158226 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 00:50:51.159140 systemd-logind[1769]: Removed session 4. Mar 4 00:50:51.244585 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:58630.service - OpenSSH per-connection server daemon (10.200.16.10:58630). Mar 4 00:50:51.610406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 00:50:51.618491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:50:51.725327 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 58630 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:50:51.726956 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:50:51.732403 systemd-logind[1769]: New session 5 of user core. Mar 4 00:50:51.735609 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 00:50:51.966505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 00:50:51.969105 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:50:52.009428 kubelet[2283]: E0304 00:50:52.009375 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:50:52.014512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:50:52.014673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:50:52.066872 sshd[2267]: pam_unix(sshd:session): session closed for user core Mar 4 00:50:52.069763 systemd-logind[1769]: Session 5 logged out. Waiting for processes to exit. Mar 4 00:50:52.072662 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:58630.service: Deactivated successfully. Mar 4 00:50:52.075380 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 00:50:52.076595 systemd-logind[1769]: Removed session 5. Mar 4 00:50:52.153583 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:58636.service - OpenSSH per-connection server daemon (10.200.16.10:58636). Mar 4 00:50:52.638604 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 58636 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:50:52.639451 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:50:52.643351 systemd-logind[1769]: New session 6 of user core. Mar 4 00:50:52.653639 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 00:50:52.990762 sshd[2295]: pam_unix(sshd:session): session closed for user core Mar 4 00:50:52.994100 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:58636.service: Deactivated successfully. 
Mar 4 00:50:52.996945 systemd[1]: session-6.scope: Deactivated successfully.
Mar 4 00:50:52.997580 systemd-logind[1769]: Session 6 logged out. Waiting for processes to exit.
Mar 4 00:50:52.998389 systemd-logind[1769]: Removed session 6.
Mar 4 00:50:53.078591 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:58638.service - OpenSSH per-connection server daemon (10.200.16.10:58638).
Mar 4 00:50:53.247073 chronyd[1755]: Selected source PHC0
Mar 4 00:50:53.560163 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 58638 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:50:53.561442 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:50:53.565344 systemd-logind[1769]: New session 7 of user core.
Mar 4 00:50:53.574761 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 00:50:54.102054 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 00:50:54.102346 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:50:54.117077 sudo[2307]: pam_unix(sudo:session): session closed for user root
Mar 4 00:50:54.189736 sshd[2303]: pam_unix(sshd:session): session closed for user core
Mar 4 00:50:54.194470 systemd-logind[1769]: Session 7 logged out. Waiting for processes to exit.
Mar 4 00:50:54.195192 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:58638.service: Deactivated successfully.
Mar 4 00:50:54.198041 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 00:50:54.199225 systemd-logind[1769]: Removed session 7.
Mar 4 00:50:54.276549 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:58648.service - OpenSSH per-connection server daemon (10.200.16.10:58648).
Mar 4 00:50:54.757439 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 58648 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:50:54.758292 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:50:54.762242 systemd-logind[1769]: New session 8 of user core.
Mar 4 00:50:54.771624 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 00:50:55.030851 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 00:50:55.031128 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:50:55.034167 sudo[2317]: pam_unix(sudo:session): session closed for user root
Mar 4 00:50:55.038916 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 00:50:55.039543 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:50:55.059541 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 00:50:55.061532 auditctl[2320]: No rules
Mar 4 00:50:55.063594 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 00:50:55.063880 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 00:50:55.066773 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 00:50:55.089278 augenrules[2339]: No rules
Mar 4 00:50:55.092220 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 00:50:55.094093 sudo[2316]: pam_unix(sudo:session): session closed for user root
Mar 4 00:50:55.173048 sshd[2312]: pam_unix(sshd:session): session closed for user core
Mar 4 00:50:55.178511 systemd-logind[1769]: Session 8 logged out. Waiting for processes to exit.
Mar 4 00:50:55.178778 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:58648.service: Deactivated successfully.
Mar 4 00:50:55.181090 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 00:50:55.181640 systemd-logind[1769]: Removed session 8.
Mar 4 00:50:55.257534 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:58658.service - OpenSSH per-connection server daemon (10.200.16.10:58658).
Mar 4 00:50:55.740748 sshd[2348]: Accepted publickey for core from 10.200.16.10 port 58658 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:50:55.742081 sshd[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:50:55.746038 systemd-logind[1769]: New session 9 of user core.
Mar 4 00:50:55.753627 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 00:50:56.015321 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 00:50:56.015595 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:50:57.062532 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 00:50:57.063601 (dockerd)[2367]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 00:50:57.765921 dockerd[2367]: time="2026-03-04T00:50:57.765646449Z" level=info msg="Starting up"
Mar 4 00:50:58.473544 dockerd[2367]: time="2026-03-04T00:50:58.473486209Z" level=info msg="Loading containers: start."
Mar 4 00:50:58.649463 kernel: Initializing XFRM netlink socket
Mar 4 00:50:58.813236 systemd-networkd[1371]: docker0: Link UP
Mar 4 00:50:58.841536 dockerd[2367]: time="2026-03-04T00:50:58.841490609Z" level=info msg="Loading containers: done."
Mar 4 00:50:58.856015 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3309867642-merged.mount: Deactivated successfully.
Mar 4 00:50:58.868475 dockerd[2367]: time="2026-03-04T00:50:58.868432249Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 00:50:58.868571 dockerd[2367]: time="2026-03-04T00:50:58.868543089Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 00:50:58.868679 dockerd[2367]: time="2026-03-04T00:50:58.868657449Z" level=info msg="Daemon has completed initialization"
Mar 4 00:50:58.923713 dockerd[2367]: time="2026-03-04T00:50:58.923357929Z" level=info msg="API listen on /run/docker.sock"
Mar 4 00:50:58.923543 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 00:50:59.396473 containerd[1831]: time="2026-03-04T00:50:59.396167129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 4 00:51:00.207257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117150874.mount: Deactivated successfully.
Mar 4 00:51:02.110221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 00:51:02.112061 containerd[1831]: time="2026-03-04T00:51:02.111946001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:02.115330 containerd[1831]: time="2026-03-04T00:51:02.114740548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 4 00:51:02.118131 containerd[1831]: time="2026-03-04T00:51:02.118091813Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:02.120159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:51:02.126849 containerd[1831]: time="2026-03-04T00:51:02.123149149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:02.127127 containerd[1831]: time="2026-03-04T00:51:02.127083411Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.730874602s"
Mar 4 00:51:02.127210 containerd[1831]: time="2026-03-04T00:51:02.127195331Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 4 00:51:02.127858 containerd[1831]: time="2026-03-04T00:51:02.127838488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 4 00:51:02.239505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:02.243055 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 00:51:02.279049 kubelet[2572]: E0304 00:51:02.278981 2572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 00:51:02.282391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 00:51:02.282564 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 00:51:04.512608 containerd[1831]: time="2026-03-04T00:51:04.512520213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:04.515559 containerd[1831]: time="2026-03-04T00:51:04.515328450Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 4 00:51:04.520085 containerd[1831]: time="2026-03-04T00:51:04.518712927Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:04.523411 containerd[1831]: time="2026-03-04T00:51:04.523382323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:04.524234 containerd[1831]: time="2026-03-04T00:51:04.524132242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 2.396181075s"
Mar 4 00:51:04.525133 containerd[1831]: time="2026-03-04T00:51:04.525113761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 4 00:51:04.525573 containerd[1831]: time="2026-03-04T00:51:04.525554681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 4 00:51:05.977342 containerd[1831]: time="2026-03-04T00:51:05.976459866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:05.979206 containerd[1831]: time="2026-03-04T00:51:05.979174183Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 4 00:51:05.982525 containerd[1831]: time="2026-03-04T00:51:05.982497740Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:05.987325 containerd[1831]: time="2026-03-04T00:51:05.987175696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:05.988543 containerd[1831]: time="2026-03-04T00:51:05.988466775Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.462786734s"
Mar 4 00:51:05.988543 containerd[1831]: time="2026-03-04T00:51:05.988501575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 4 00:51:05.989216 containerd[1831]: time="2026-03-04T00:51:05.989188254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 4 00:51:06.962630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839490237.mount: Deactivated successfully.
Mar 4 00:51:07.277498 containerd[1831]: time="2026-03-04T00:51:07.277391709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:07.280112 containerd[1831]: time="2026-03-04T00:51:07.280078106Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 4 00:51:07.283063 containerd[1831]: time="2026-03-04T00:51:07.283017904Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:07.287153 containerd[1831]: time="2026-03-04T00:51:07.287110780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:07.287813 containerd[1831]: time="2026-03-04T00:51:07.287656819Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.298435285s"
Mar 4 00:51:07.287813 containerd[1831]: time="2026-03-04T00:51:07.287690419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 4 00:51:07.288579 containerd[1831]: time="2026-03-04T00:51:07.288161619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 4 00:51:07.913077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021424114.mount: Deactivated successfully.
Mar 4 00:51:09.191255 containerd[1831]: time="2026-03-04T00:51:09.191206988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:09.194616 containerd[1831]: time="2026-03-04T00:51:09.194407745Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 4 00:51:09.197966 containerd[1831]: time="2026-03-04T00:51:09.197602982Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:09.202423 containerd[1831]: time="2026-03-04T00:51:09.202374578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:09.203744 containerd[1831]: time="2026-03-04T00:51:09.203606336Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.915416997s"
Mar 4 00:51:09.203744 containerd[1831]: time="2026-03-04T00:51:09.203642536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 4 00:51:09.204740 containerd[1831]: time="2026-03-04T00:51:09.204717775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 4 00:51:09.775510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445958303.mount: Deactivated successfully.
Mar 4 00:51:09.794732 containerd[1831]: time="2026-03-04T00:51:09.794686797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:09.797315 containerd[1831]: time="2026-03-04T00:51:09.797132796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 4 00:51:09.800324 containerd[1831]: time="2026-03-04T00:51:09.800080795Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:09.804201 containerd[1831]: time="2026-03-04T00:51:09.804156794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:09.805368 containerd[1831]: time="2026-03-04T00:51:09.804864313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 600.030338ms"
Mar 4 00:51:09.805368 containerd[1831]: time="2026-03-04T00:51:09.804894353Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 4 00:51:09.805795 containerd[1831]: time="2026-03-04T00:51:09.805754073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 4 00:51:10.538163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79434187.mount: Deactivated successfully.
Mar 4 00:51:10.585326 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 4 00:51:12.360276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 4 00:51:12.365479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:51:12.514553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:12.525907 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 00:51:12.556300 kubelet[2676]: E0304 00:51:12.556237 2676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 00:51:12.558502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 00:51:12.558640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 00:51:13.715142 containerd[1831]: time="2026-03-04T00:51:13.715094196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:13.721950 containerd[1831]: time="2026-03-04T00:51:13.721914394Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 4 00:51:13.725229 containerd[1831]: time="2026-03-04T00:51:13.725200353Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:13.730665 containerd[1831]: time="2026-03-04T00:51:13.730632071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:13.732831 containerd[1831]: time="2026-03-04T00:51:13.732798350Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 3.927015077s"
Mar 4 00:51:13.732868 containerd[1831]: time="2026-03-04T00:51:13.732835470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 4 00:51:14.515207 update_engine[1774]: I20260304 00:51:14.514329 1774 update_attempter.cc:509] Updating boot flags...
Mar 4 00:51:14.582626 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2767)
Mar 4 00:51:14.693569 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2760)
Mar 4 00:51:18.491761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:18.498518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:51:18.529272 systemd[1]: Reloading requested from client PID 2829 ('systemctl') (unit session-9.scope)...
Mar 4 00:51:18.529291 systemd[1]: Reloading...
Mar 4 00:51:18.620339 zram_generator::config[2872]: No configuration found.
Mar 4 00:51:18.748283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:51:18.824601 systemd[1]: Reloading finished in 294 ms.
Mar 4 00:51:19.018326 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 4 00:51:19.018440 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 4 00:51:19.019738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:19.026609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:51:19.130482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:19.134948 (kubelet)[2946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 00:51:19.240345 kubelet[2946]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 00:51:19.241053 kubelet[2946]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 00:51:19.241053 kubelet[2946]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 00:51:19.241142 kubelet[2946]: I0304 00:51:19.241063 2946 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 00:51:19.961101 kubelet[2946]: I0304 00:51:19.961064 2946 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 00:51:19.961387 kubelet[2946]: I0304 00:51:19.961230 2946 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 00:51:19.961642 kubelet[2946]: I0304 00:51:19.961628 2946 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 00:51:19.985337 kubelet[2946]: E0304 00:51:19.984999 2946 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 00:51:19.986584 kubelet[2946]: I0304 00:51:19.986560 2946 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 00:51:19.993988 kubelet[2946]: E0304 00:51:19.993951 2946 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 00:51:19.994106 kubelet[2946]: I0304 00:51:19.994095 2946 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 00:51:19.997524 kubelet[2946]: I0304 00:51:19.997503 2946 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 00:51:19.999036 kubelet[2946]: I0304 00:51:19.999005 2946 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 00:51:19.999291 kubelet[2946]: I0304 00:51:19.999133 2946 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-4860195aa5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 4 00:51:19.999625 kubelet[2946]: I0304 00:51:19.999432 2946 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 00:51:19.999625 kubelet[2946]: I0304 00:51:19.999446 2946 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 00:51:19.999625 kubelet[2946]: I0304 00:51:19.999580 2946 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 00:51:20.002399 kubelet[2946]: I0304 00:51:20.002384 2946 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 00:51:20.002508 kubelet[2946]: I0304 00:51:20.002497 2946 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 00:51:20.002578 kubelet[2946]: I0304 00:51:20.002571 2946 kubelet.go:386] "Adding apiserver pod source"
Mar 4 00:51:20.003947 kubelet[2946]: I0304 00:51:20.003930 2946 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 00:51:20.007862 kubelet[2946]: E0304 00:51:20.007542 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4860195aa5&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 00:51:20.007953 kubelet[2946]: E0304 00:51:20.007927 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 00:51:20.008039 kubelet[2946]: I0304 00:51:20.008023 2946 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 00:51:20.008821 kubelet[2946]: I0304 00:51:20.008605 2946 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 00:51:20.008821 kubelet[2946]: W0304 00:51:20.008668 2946 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 00:51:20.012297 kubelet[2946]: I0304 00:51:20.012276 2946 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 00:51:20.012399 kubelet[2946]: I0304 00:51:20.012334 2946 server.go:1289] "Started kubelet"
Mar 4 00:51:20.014814 kubelet[2946]: I0304 00:51:20.014295 2946 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 00:51:20.014814 kubelet[2946]: I0304 00:51:20.014603 2946 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 00:51:20.014914 kubelet[2946]: I0304 00:51:20.014903 2946 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 00:51:20.015790 kubelet[2946]: I0304 00:51:20.015768 2946 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 00:51:20.018024 kubelet[2946]: I0304 00:51:20.018002 2946 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 00:51:20.021428 kubelet[2946]: E0304 00:51:20.017563 2946 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-4860195aa5.18997d1a3e25e1d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-4860195aa5,UID:ci-4081.3.6-n-4860195aa5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-4860195aa5,},FirstTimestamp:2026-03-04 00:51:20.012292566 +0000 UTC m=+0.874158759,LastTimestamp:2026-03-04 00:51:20.012292566 +0000 UTC m=+0.874158759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-4860195aa5,}"
Mar 4 00:51:20.024116 kubelet[2946]: I0304 00:51:20.023989 2946 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 00:51:20.024281 kubelet[2946]: I0304 00:51:20.024263 2946 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 00:51:20.026087 kubelet[2946]: I0304 00:51:20.026060 2946 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 00:51:20.026154 kubelet[2946]: I0304 00:51:20.026118 2946 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 00:51:20.027959 kubelet[2946]: I0304 00:51:20.027911 2946 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 00:51:20.028171 kubelet[2946]: E0304 00:51:20.028147 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 00:51:20.028455 kubelet[2946]: E0304 00:51:20.028430 2946 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 00:51:20.029133 kubelet[2946]: E0304 00:51:20.029095 2946 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4860195aa5\" not found"
Mar 4 00:51:20.029459 kubelet[2946]: E0304 00:51:20.029212 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4860195aa5?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms"
Mar 4 00:51:20.033599 kubelet[2946]: I0304 00:51:20.033568 2946 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 00:51:20.035395 kubelet[2946]: I0304 00:51:20.034868 2946 factory.go:223] Registration of the containerd container factory successfully
Mar 4 00:51:20.035395 kubelet[2946]: I0304 00:51:20.034885 2946 factory.go:223] Registration of the systemd container factory successfully
Mar 4 00:51:20.064710 kubelet[2946]: I0304 00:51:20.064683 2946 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 00:51:20.064859 kubelet[2946]: I0304 00:51:20.064850 2946 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 00:51:20.064925 kubelet[2946]: I0304 00:51:20.064916 2946 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 00:51:20.064965 kubelet[2946]: I0304 00:51:20.064959 2946 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 00:51:20.065053 kubelet[2946]: E0304 00:51:20.065036 2946 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 00:51:20.065783 kubelet[2946]: E0304 00:51:20.065747 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 00:51:20.093915 kubelet[2946]: I0304 00:51:20.093890 2946 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 00:51:20.093915 kubelet[2946]: I0304 00:51:20.093907 2946 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 00:51:20.093915 kubelet[2946]: I0304 00:51:20.093925 2946 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:51:20.099910 kubelet[2946]: I0304 00:51:20.099889 2946 policy_none.go:49] "None policy: Start" Mar 4 00:51:20.099910 kubelet[2946]: I0304 00:51:20.099913 2946 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 00:51:20.099998 kubelet[2946]: I0304 00:51:20.099925 2946 state_mem.go:35] "Initializing new in-memory state store" Mar 4 00:51:20.106844 kubelet[2946]: E0304 00:51:20.106817 2946 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 00:51:20.107020 kubelet[2946]: I0304 00:51:20.107004 2946 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 00:51:20.107049 kubelet[2946]: I0304 00:51:20.107021 2946 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 00:51:20.108293 kubelet[2946]: I0304 
00:51:20.108274 2946 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 00:51:20.110809 kubelet[2946]: E0304 00:51:20.110787 2946 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 00:51:20.110941 kubelet[2946]: E0304 00:51:20.110828 2946 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-4860195aa5\" not found" Mar 4 00:51:20.175004 kubelet[2946]: E0304 00:51:20.174957 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.181843 kubelet[2946]: E0304 00:51:20.181812 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.189166 kubelet[2946]: E0304 00:51:20.189137 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.208518 kubelet[2946]: I0304 00:51:20.208488 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.208822 kubelet[2946]: E0304 00:51:20.208799 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228381 kubelet[2946]: I0304 00:51:20.227173 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: 
\"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228381 kubelet[2946]: I0304 00:51:20.227200 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1974f18e0ef44bbf574d777da0d2a8e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-4860195aa5\" (UID: \"d1974f18e0ef44bbf574d777da0d2a8e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228381 kubelet[2946]: I0304 00:51:20.227219 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228381 kubelet[2946]: I0304 00:51:20.227257 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228381 kubelet[2946]: I0304 00:51:20.227273 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228546 kubelet[2946]: I0304 00:51:20.227287 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f21e8f31ad8622a2d9ce38c3b1c773aa-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" (UID: \"f21e8f31ad8622a2d9ce38c3b1c773aa\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228546 kubelet[2946]: I0304 00:51:20.227300 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f21e8f31ad8622a2d9ce38c3b1c773aa-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" (UID: \"f21e8f31ad8622a2d9ce38c3b1c773aa\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228546 kubelet[2946]: I0304 00:51:20.227329 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f21e8f31ad8622a2d9ce38c3b1c773aa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" (UID: \"f21e8f31ad8622a2d9ce38c3b1c773aa\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.228546 kubelet[2946]: I0304 00:51:20.227344 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.230443 kubelet[2946]: E0304 00:51:20.230408 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4860195aa5?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Mar 4 00:51:20.410555 kubelet[2946]: I0304 00:51:20.410505 2946 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.411043 kubelet[2946]: E0304 00:51:20.410830 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.478138 containerd[1831]: time="2026-03-04T00:51:20.477862742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-4860195aa5,Uid:f21e8f31ad8622a2d9ce38c3b1c773aa,Namespace:kube-system,Attempt:0,}" Mar 4 00:51:20.483143 containerd[1831]: time="2026-03-04T00:51:20.483054297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-4860195aa5,Uid:9f56ef64b52aa3fec4f1c1fcca53d32b,Namespace:kube-system,Attempt:0,}" Mar 4 00:51:20.493111 containerd[1831]: time="2026-03-04T00:51:20.492506569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-4860195aa5,Uid:d1974f18e0ef44bbf574d777da0d2a8e,Namespace:kube-system,Attempt:0,}" Mar 4 00:51:20.631788 kubelet[2946]: E0304 00:51:20.631737 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4860195aa5?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Mar 4 00:51:20.812897 kubelet[2946]: I0304 00:51:20.812811 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:20.813150 kubelet[2946]: E0304 00:51:20.813102 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:21.067126 kubelet[2946]: E0304 00:51:21.066963 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 00:51:21.091939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499213883.mount: Deactivated successfully. Mar 4 00:51:21.116815 containerd[1831]: time="2026-03-04T00:51:21.116758174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:51:21.119592 containerd[1831]: time="2026-03-04T00:51:21.119555452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 4 00:51:21.122236 containerd[1831]: time="2026-03-04T00:51:21.122202969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:51:21.125767 containerd[1831]: time="2026-03-04T00:51:21.125026127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:51:21.128084 containerd[1831]: time="2026-03-04T00:51:21.128052925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 00:51:21.131336 containerd[1831]: time="2026-03-04T00:51:21.130966002Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:51:21.133273 containerd[1831]: time="2026-03-04T00:51:21.133216280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=0" Mar 4 00:51:21.138325 containerd[1831]: time="2026-03-04T00:51:21.137266237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:51:21.138325 containerd[1831]: time="2026-03-04T00:51:21.138078196Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.140134ms" Mar 4 00:51:21.139784 containerd[1831]: time="2026-03-04T00:51:21.139748795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 656.618378ms" Mar 4 00:51:21.141919 containerd[1831]: time="2026-03-04T00:51:21.141883113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 649.305184ms" Mar 4 00:51:21.143385 kubelet[2946]: E0304 00:51:21.143345 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4860195aa5&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Mar 4 00:51:21.415273 kubelet[2946]: E0304 00:51:21.415228 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 00:51:21.432520 kubelet[2946]: E0304 00:51:21.432481 2946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4860195aa5?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s" Mar 4 00:51:21.614359 kubelet[2946]: E0304 00:51:21.614324 2946 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 00:51:21.615666 kubelet[2946]: I0304 00:51:21.615643 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:21.615998 kubelet[2946]: E0304 00:51:21.615973 2946 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:21.816907 containerd[1831]: time="2026-03-04T00:51:21.816757396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:51:21.817946 containerd[1831]: time="2026-03-04T00:51:21.817480595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:51:21.817946 containerd[1831]: time="2026-03-04T00:51:21.817631395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:51:21.817946 containerd[1831]: time="2026-03-04T00:51:21.817735595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:51:21.821947 containerd[1831]: time="2026-03-04T00:51:21.821871151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:51:21.822358 containerd[1831]: time="2026-03-04T00:51:21.822323111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:51:21.822526 containerd[1831]: time="2026-03-04T00:51:21.822469951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:51:21.822739 containerd[1831]: time="2026-03-04T00:51:21.822680831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:51:21.824290 containerd[1831]: time="2026-03-04T00:51:21.823716910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:51:21.824290 containerd[1831]: time="2026-03-04T00:51:21.823764990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:51:21.824290 containerd[1831]: time="2026-03-04T00:51:21.823780430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:51:21.824290 containerd[1831]: time="2026-03-04T00:51:21.823846630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:51:21.899110 containerd[1831]: time="2026-03-04T00:51:21.899054768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-4860195aa5,Uid:9f56ef64b52aa3fec4f1c1fcca53d32b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7a65e46eb571e9d0f2be0eafa13ad0d44e152099265548a87166c7d5978cf39\"" Mar 4 00:51:21.905573 containerd[1831]: time="2026-03-04T00:51:21.905536242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-4860195aa5,Uid:f21e8f31ad8622a2d9ce38c3b1c773aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fb063a4df484e03a7ced04b384a1f174162a9e4925cf7e0b056562d4e444a15\"" Mar 4 00:51:21.908682 containerd[1831]: time="2026-03-04T00:51:21.908627640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-4860195aa5,Uid:d1974f18e0ef44bbf574d777da0d2a8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2b439394f15e683f5a9c62281c645cea820d02563bcc1fcff4682a3d5a04215\"" Mar 4 00:51:21.911782 containerd[1831]: time="2026-03-04T00:51:21.911668957Z" level=info msg="CreateContainer within sandbox \"f7a65e46eb571e9d0f2be0eafa13ad0d44e152099265548a87166c7d5978cf39\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 00:51:21.916458 containerd[1831]: time="2026-03-04T00:51:21.916428753Z" level=info msg="CreateContainer within sandbox \"1fb063a4df484e03a7ced04b384a1f174162a9e4925cf7e0b056562d4e444a15\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 00:51:21.923911 containerd[1831]: time="2026-03-04T00:51:21.923880747Z" level=info msg="CreateContainer within sandbox 
\"a2b439394f15e683f5a9c62281c645cea820d02563bcc1fcff4682a3d5a04215\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 00:51:21.973784 containerd[1831]: time="2026-03-04T00:51:21.973738506Z" level=info msg="CreateContainer within sandbox \"f7a65e46eb571e9d0f2be0eafa13ad0d44e152099265548a87166c7d5978cf39\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f01dd33d4b9ebf6415a8bac4766e22eb5507142e1a09a9e3fd96accdb436d9c0\"" Mar 4 00:51:21.978216 containerd[1831]: time="2026-03-04T00:51:21.977860783Z" level=info msg="CreateContainer within sandbox \"1fb063a4df484e03a7ced04b384a1f174162a9e4925cf7e0b056562d4e444a15\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ecc189072c60e50b2ce289e67f297877ce3b959248db03a1fa2ff5d4bd588400\"" Mar 4 00:51:21.978216 containerd[1831]: time="2026-03-04T00:51:21.978098742Z" level=info msg="StartContainer for \"f01dd33d4b9ebf6415a8bac4766e22eb5507142e1a09a9e3fd96accdb436d9c0\"" Mar 4 00:51:21.986096 containerd[1831]: time="2026-03-04T00:51:21.986062216Z" level=info msg="StartContainer for \"ecc189072c60e50b2ce289e67f297877ce3b959248db03a1fa2ff5d4bd588400\"" Mar 4 00:51:21.987568 containerd[1831]: time="2026-03-04T00:51:21.987205695Z" level=info msg="CreateContainer within sandbox \"a2b439394f15e683f5a9c62281c645cea820d02563bcc1fcff4682a3d5a04215\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"efebf08725b5293d1686b35743099c5d1c7c905d1ea0183b8bacbe34b1586f92\"" Mar 4 00:51:21.987784 containerd[1831]: time="2026-03-04T00:51:21.987762974Z" level=info msg="StartContainer for \"efebf08725b5293d1686b35743099c5d1c7c905d1ea0183b8bacbe34b1586f92\"" Mar 4 00:51:22.067395 containerd[1831]: time="2026-03-04T00:51:22.064256871Z" level=info msg="StartContainer for \"f01dd33d4b9ebf6415a8bac4766e22eb5507142e1a09a9e3fd96accdb436d9c0\" returns successfully" Mar 4 00:51:22.095318 containerd[1831]: time="2026-03-04T00:51:22.093157487Z" 
level=info msg="StartContainer for \"ecc189072c60e50b2ce289e67f297877ce3b959248db03a1fa2ff5d4bd588400\" returns successfully" Mar 4 00:51:22.095318 containerd[1831]: time="2026-03-04T00:51:22.093300167Z" level=info msg="StartContainer for \"efebf08725b5293d1686b35743099c5d1c7c905d1ea0183b8bacbe34b1586f92\" returns successfully" Mar 4 00:51:22.107515 kubelet[2946]: E0304 00:51:22.105697 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:22.112481 kubelet[2946]: E0304 00:51:22.112451 2946 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 00:51:23.110333 kubelet[2946]: E0304 00:51:23.108633 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:23.114780 kubelet[2946]: E0304 00:51:23.114609 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:23.114780 kubelet[2946]: E0304 00:51:23.114620 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:23.223328 kubelet[2946]: I0304 00:51:23.222655 2946 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.010693 kubelet[2946]: I0304 00:51:24.010663 2946 apiserver.go:52] "Watching apiserver" Mar 4 
00:51:24.049721 kubelet[2946]: E0304 00:51:24.049676 2946 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.116891 kubelet[2946]: E0304 00:51:24.116697 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.116891 kubelet[2946]: E0304 00:51:24.116839 2946 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4860195aa5\" not found" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.126249 kubelet[2946]: I0304 00:51:24.126208 2946 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 00:51:24.628637 kubelet[2946]: I0304 00:51:24.627396 2946 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.628637 kubelet[2946]: E0304 00:51:24.627435 2946 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-4860195aa5\": node \"ci-4081.3.6-n-4860195aa5\" not found" Mar 4 00:51:24.630909 kubelet[2946]: I0304 00:51:24.629940 2946 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.675072 kubelet[2946]: E0304 00:51:24.675040 2946 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.678318 kubelet[2946]: I0304 00:51:24.675232 2946 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.684581 kubelet[2946]: E0304 00:51:24.684543 2946 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.684581 kubelet[2946]: I0304 00:51:24.684577 2946 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:24.687494 kubelet[2946]: E0304 00:51:24.687465 2946 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4860195aa5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:25.115245 kubelet[2946]: I0304 00:51:25.114113 2946 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:25.115245 kubelet[2946]: I0304 00:51:25.114221 2946 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" Mar 4 00:51:25.124512 kubelet[2946]: I0304 00:51:25.124389 2946 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:51:25.130182 kubelet[2946]: I0304 00:51:25.130045 2946 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:51:26.172229 systemd[1]: Reloading requested from client PID 3228 ('systemctl') (unit session-9.scope)... Mar 4 00:51:26.172611 systemd[1]: Reloading... Mar 4 00:51:26.263334 zram_generator::config[3268]: No configuration found. 
Mar 4 00:51:26.385020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:51:26.471929 systemd[1]: Reloading finished in 299 ms.
Mar 4 00:51:26.503945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:51:26.522427 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 00:51:26.522762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:26.531976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:51:26.721814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:51:26.727989 (kubelet)[3342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 00:51:26.773672 kubelet[3342]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 00:51:26.773672 kubelet[3342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 00:51:26.773672 kubelet[3342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 00:51:26.774096 kubelet[3342]: I0304 00:51:26.773714 3342 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 00:51:26.786345 kubelet[3342]: I0304 00:51:26.786303 3342 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 00:51:26.786345 kubelet[3342]: I0304 00:51:26.786338 3342 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 00:51:26.786566 kubelet[3342]: I0304 00:51:26.786549 3342 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 00:51:26.789262 kubelet[3342]: I0304 00:51:26.789219 3342 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 4 00:51:26.791998 kubelet[3342]: I0304 00:51:26.791767 3342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 00:51:26.794882 kubelet[3342]: E0304 00:51:26.794849 3342 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 00:51:26.795256 kubelet[3342]: I0304 00:51:26.795040 3342 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 00:51:26.800123 kubelet[3342]: I0304 00:51:26.800100 3342 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 00:51:26.800734 kubelet[3342]: I0304 00:51:26.800705 3342 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 00:51:26.800955 kubelet[3342]: I0304 00:51:26.800808 3342 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-4860195aa5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 4 00:51:26.801165 kubelet[3342]: I0304 00:51:26.801072 3342 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 00:51:26.801165 kubelet[3342]: I0304 00:51:26.801085 3342 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 00:51:26.801165 kubelet[3342]: I0304 00:51:26.801137 3342 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 00:51:26.801617 kubelet[3342]: I0304 00:51:26.801486 3342 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 00:51:26.801617 kubelet[3342]: I0304 00:51:26.801526 3342 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 00:51:26.801617 kubelet[3342]: I0304 00:51:26.801556 3342 kubelet.go:386] "Adding apiserver pod source"
Mar 4 00:51:26.801617 kubelet[3342]: I0304 00:51:26.801570 3342 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 00:51:26.813671 kubelet[3342]: I0304 00:51:26.813644 3342 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 00:51:26.815328 kubelet[3342]: I0304 00:51:26.814375 3342 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 00:51:26.826152 kubelet[3342]: I0304 00:51:26.826106 3342 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 00:51:26.826152 kubelet[3342]: I0304 00:51:26.826157 3342 server.go:1289] "Started kubelet"
Mar 4 00:51:26.828218 kubelet[3342]: I0304 00:51:26.828183 3342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 00:51:26.837283 kubelet[3342]: I0304 00:51:26.837232 3342 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 00:51:26.838167 kubelet[3342]: I0304 00:51:26.838150 3342 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 00:51:26.842057 kubelet[3342]: I0304 00:51:26.839937 3342 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 00:51:26.842057 kubelet[3342]: I0304 00:51:26.841742 3342 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 00:51:26.842057 kubelet[3342]: I0304 00:51:26.841896 3342 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 00:51:26.845001 kubelet[3342]: I0304 00:51:26.844721 3342 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 00:51:26.845001 kubelet[3342]: I0304 00:51:26.844910 3342 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 00:51:26.845001 kubelet[3342]: I0304 00:51:26.844927 3342 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 00:51:26.846125 kubelet[3342]: I0304 00:51:26.846101 3342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 00:51:26.847205 kubelet[3342]: I0304 00:51:26.846104 3342 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 00:51:26.847371 kubelet[3342]: I0304 00:51:26.847351 3342 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 00:51:26.847453 kubelet[3342]: I0304 00:51:26.847443 3342 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 00:51:26.847501 kubelet[3342]: I0304 00:51:26.847494 3342 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 4 00:51:26.847588 kubelet[3342]: E0304 00:51:26.847572 3342 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 00:51:26.851255 kubelet[3342]: I0304 00:51:26.851230 3342 factory.go:223] Registration of the systemd container factory successfully
Mar 4 00:51:26.851889 kubelet[3342]: I0304 00:51:26.851549 3342 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 00:51:26.856802 kubelet[3342]: I0304 00:51:26.855142 3342 factory.go:223] Registration of the containerd container factory successfully
Mar 4 00:51:26.916280 kubelet[3342]: I0304 00:51:26.916252 3342 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 00:51:26.916280 kubelet[3342]: I0304 00:51:26.916272 3342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 00:51:26.916280 kubelet[3342]: I0304 00:51:26.916293 3342 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 00:51:26.916463 kubelet[3342]: I0304 00:51:26.916445 3342 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 4 00:51:26.916485 kubelet[3342]: I0304 00:51:26.916456 3342 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 4 00:51:26.916485 kubelet[3342]: I0304 00:51:26.916476 3342 policy_none.go:49] "None policy: Start"
Mar 4 00:51:26.916526 kubelet[3342]: I0304 00:51:26.916485 3342 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 4 00:51:26.916526 kubelet[3342]: I0304 00:51:26.916494 3342 state_mem.go:35] "Initializing new in-memory state store"
Mar 4 00:51:26.916580 kubelet[3342]: I0304 00:51:26.916568 3342 state_mem.go:75] "Updated machine memory state"
Mar 4 00:51:26.917725 kubelet[3342]: E0304 00:51:26.917706 3342 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 00:51:26.919340 kubelet[3342]: I0304 00:51:26.917874 3342 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 4 00:51:26.919340 kubelet[3342]: I0304 00:51:26.917888 3342 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 00:51:26.919340 kubelet[3342]: I0304 00:51:26.918590 3342 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 4 00:51:26.921559 kubelet[3342]: E0304 00:51:26.921537 3342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 00:51:26.948830 kubelet[3342]: I0304 00:51:26.948428 3342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:26.948830 kubelet[3342]: I0304 00:51:26.948485 3342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:26.948830 kubelet[3342]: I0304 00:51:26.948693 3342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:26.959299 kubelet[3342]: I0304 00:51:26.959003 3342 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 4 00:51:26.960187 kubelet[3342]: I0304 00:51:26.960158 3342 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 4 00:51:26.960260 kubelet[3342]: I0304 00:51:26.960224 3342 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 4 00:51:26.960285 kubelet[3342]: E0304 00:51:26.960259 3342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4860195aa5\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:26.960386 kubelet[3342]: E0304 00:51:26.960338 3342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.025132 kubelet[3342]: I0304 00:51:27.025044 3342 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043014 kubelet[3342]: I0304 00:51:27.042976 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f21e8f31ad8622a2d9ce38c3b1c773aa-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" (UID: \"f21e8f31ad8622a2d9ce38c3b1c773aa\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043132 kubelet[3342]: I0304 00:51:27.043022 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043132 kubelet[3342]: I0304 00:51:27.043044 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043132 kubelet[3342]: I0304 00:51:27.043058 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043132 kubelet[3342]: I0304 00:51:27.043074 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043132 kubelet[3342]: I0304 00:51:27.043087 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1974f18e0ef44bbf574d777da0d2a8e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-4860195aa5\" (UID: \"d1974f18e0ef44bbf574d777da0d2a8e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043269 kubelet[3342]: I0304 00:51:27.043103 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f21e8f31ad8622a2d9ce38c3b1c773aa-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" (UID: \"f21e8f31ad8622a2d9ce38c3b1c773aa\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043269 kubelet[3342]: I0304 00:51:27.043116 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f21e8f31ad8622a2d9ce38c3b1c773aa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" (UID: \"f21e8f31ad8622a2d9ce38c3b1c773aa\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.043269 kubelet[3342]: I0304 00:51:27.043131 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f56ef64b52aa3fec4f1c1fcca53d32b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-4860195aa5\" (UID: \"9f56ef64b52aa3fec4f1c1fcca53d32b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.047632 kubelet[3342]: I0304 00:51:27.047599 3342 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.047729 kubelet[3342]: I0304 00:51:27.047689 3342 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.803780 kubelet[3342]: I0304 00:51:27.803731 3342 apiserver.go:52] "Watching apiserver"
Mar 4 00:51:27.842079 kubelet[3342]: I0304 00:51:27.842019 3342 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 4 00:51:27.887660 kubelet[3342]: I0304 00:51:27.885675 3342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.898809 kubelet[3342]: I0304 00:51:27.898579 3342 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 4 00:51:27.898809 kubelet[3342]: E0304 00:51:27.898634 3342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4860195aa5\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5"
Mar 4 00:51:27.911807 kubelet[3342]: I0304 00:51:27.911143 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4860195aa5" podStartSLOduration=2.911127905 podStartE2EDuration="2.911127905s" podCreationTimestamp="2026-03-04 00:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:51:27.910928185 +0000 UTC m=+1.179211659" watchObservedRunningTime="2026-03-04 00:51:27.911127905 +0000 UTC m=+1.179411339"
Mar 4 00:51:27.938089 kubelet[3342]: I0304 00:51:27.937777 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4860195aa5" podStartSLOduration=2.937748923 podStartE2EDuration="2.937748923s" podCreationTimestamp="2026-03-04 00:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:51:27.923621014 +0000 UTC m=+1.191904488" watchObservedRunningTime="2026-03-04 00:51:27.937748923 +0000 UTC m=+1.206032397"
Mar 4 00:51:27.938651 kubelet[3342]: I0304 00:51:27.938487 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4860195aa5" podStartSLOduration=1.9384721219999999 podStartE2EDuration="1.938472122s" podCreationTimestamp="2026-03-04 00:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:51:27.938120522 +0000 UTC m=+1.206403996" watchObservedRunningTime="2026-03-04 00:51:27.938472122 +0000 UTC m=+1.206755636"
Mar 4 00:51:34.147669 kubelet[3342]: I0304 00:51:34.147646 3342 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 4 00:51:34.148408 containerd[1831]: time="2026-03-04T00:51:34.148374023Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 4 00:51:34.148906 kubelet[3342]: I0304 00:51:34.148742 3342 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 4 00:51:34.987828 kubelet[3342]: I0304 00:51:34.987669 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fdd43ab-d141-4625-ab7a-030eab3f96e7-kube-proxy\") pod \"kube-proxy-j59cg\" (UID: \"0fdd43ab-d141-4625-ab7a-030eab3f96e7\") " pod="kube-system/kube-proxy-j59cg"
Mar 4 00:51:34.987828 kubelet[3342]: I0304 00:51:34.987713 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fdd43ab-d141-4625-ab7a-030eab3f96e7-xtables-lock\") pod \"kube-proxy-j59cg\" (UID: \"0fdd43ab-d141-4625-ab7a-030eab3f96e7\") " pod="kube-system/kube-proxy-j59cg"
Mar 4 00:51:34.987828 kubelet[3342]: I0304 00:51:34.987739 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fdd43ab-d141-4625-ab7a-030eab3f96e7-lib-modules\") pod \"kube-proxy-j59cg\" (UID: \"0fdd43ab-d141-4625-ab7a-030eab3f96e7\") " pod="kube-system/kube-proxy-j59cg"
Mar 4 00:51:34.987828 kubelet[3342]: I0304 00:51:34.987756 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qb9n\" (UniqueName: \"kubernetes.io/projected/0fdd43ab-d141-4625-ab7a-030eab3f96e7-kube-api-access-2qb9n\") pod \"kube-proxy-j59cg\" (UID: \"0fdd43ab-d141-4625-ab7a-030eab3f96e7\") " pod="kube-system/kube-proxy-j59cg"
Mar 4 00:51:35.094972 kubelet[3342]: E0304 00:51:35.094932 3342 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 4 00:51:35.094972 kubelet[3342]: E0304 00:51:35.094964 3342 projected.go:194] Error preparing data for projected volume kube-api-access-2qb9n for pod kube-system/kube-proxy-j59cg: configmap "kube-root-ca.crt" not found
Mar 4 00:51:35.095125 kubelet[3342]: E0304 00:51:35.095037 3342 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0fdd43ab-d141-4625-ab7a-030eab3f96e7-kube-api-access-2qb9n podName:0fdd43ab-d141-4625-ab7a-030eab3f96e7 nodeName:}" failed. No retries permitted until 2026-03-04 00:51:35.595016944 +0000 UTC m=+8.863300418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2qb9n" (UniqueName: "kubernetes.io/projected/0fdd43ab-d141-4625-ab7a-030eab3f96e7-kube-api-access-2qb9n") pod "kube-proxy-j59cg" (UID: "0fdd43ab-d141-4625-ab7a-030eab3f96e7") : configmap "kube-root-ca.crt" not found
Mar 4 00:51:35.290178 kubelet[3342]: I0304 00:51:35.289109 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55t9x\" (UniqueName: \"kubernetes.io/projected/c369a569-658e-4fc9-8152-4a0c803c1d81-kube-api-access-55t9x\") pod \"tigera-operator-6bf85f8dd-th62h\" (UID: \"c369a569-658e-4fc9-8152-4a0c803c1d81\") " pod="tigera-operator/tigera-operator-6bf85f8dd-th62h"
Mar 4 00:51:35.290178 kubelet[3342]: I0304 00:51:35.289150 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c369a569-658e-4fc9-8152-4a0c803c1d81-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-th62h\" (UID: \"c369a569-658e-4fc9-8152-4a0c803c1d81\") " pod="tigera-operator/tigera-operator-6bf85f8dd-th62h"
Mar 4 00:51:35.584326 containerd[1831]: time="2026-03-04T00:51:35.584034652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-th62h,Uid:c369a569-658e-4fc9-8152-4a0c803c1d81,Namespace:tigera-operator,Attempt:0,}"
Mar 4 00:51:35.623933 containerd[1831]: time="2026-03-04T00:51:35.623573698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 00:51:35.623933 containerd[1831]: time="2026-03-04T00:51:35.623621818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 00:51:35.623933 containerd[1831]: time="2026-03-04T00:51:35.623675498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:35.623933 containerd[1831]: time="2026-03-04T00:51:35.623793858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:35.664286 containerd[1831]: time="2026-03-04T00:51:35.664238744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-th62h,Uid:c369a569-658e-4fc9-8152-4a0c803c1d81,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1160272b3b3333b4703750c823b22893ffe0e331f96150324447aa67b8326466\""
Mar 4 00:51:35.666064 containerd[1831]: time="2026-03-04T00:51:35.665878303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 4 00:51:35.854848 containerd[1831]: time="2026-03-04T00:51:35.854448424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j59cg,Uid:0fdd43ab-d141-4625-ab7a-030eab3f96e7,Namespace:kube-system,Attempt:0,}"
Mar 4 00:51:35.895225 containerd[1831]: time="2026-03-04T00:51:35.895148069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 00:51:35.895432 containerd[1831]: time="2026-03-04T00:51:35.895201629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 00:51:35.895432 containerd[1831]: time="2026-03-04T00:51:35.895216509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:35.895432 containerd[1831]: time="2026-03-04T00:51:35.895300149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:35.929177 containerd[1831]: time="2026-03-04T00:51:35.929143481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j59cg,Uid:0fdd43ab-d141-4625-ab7a-030eab3f96e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69ca9840bb358207125e750ae7b63265c2fd03ed7e3f21c82d95d90f1ba9c44\""
Mar 4 00:51:35.939790 containerd[1831]: time="2026-03-04T00:51:35.939747952Z" level=info msg="CreateContainer within sandbox \"d69ca9840bb358207125e750ae7b63265c2fd03ed7e3f21c82d95d90f1ba9c44\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 4 00:51:35.974626 containerd[1831]: time="2026-03-04T00:51:35.974581562Z" level=info msg="CreateContainer within sandbox \"d69ca9840bb358207125e750ae7b63265c2fd03ed7e3f21c82d95d90f1ba9c44\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dabf334a8b701becf0af819f3d8621fef1e58a08b62cf82b8eab3f71e529ed91\""
Mar 4 00:51:35.975439 containerd[1831]: time="2026-03-04T00:51:35.975413122Z" level=info msg="StartContainer for \"dabf334a8b701becf0af819f3d8621fef1e58a08b62cf82b8eab3f71e529ed91\""
Mar 4 00:51:36.029166 containerd[1831]: time="2026-03-04T00:51:36.029119236Z" level=info msg="StartContainer for \"dabf334a8b701becf0af819f3d8621fef1e58a08b62cf82b8eab3f71e529ed91\" returns successfully"
Mar 4 00:51:36.934392 kubelet[3342]: I0304 00:51:36.934059 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j59cg" podStartSLOduration=2.934041353 podStartE2EDuration="2.934041353s" podCreationTimestamp="2026-03-04 00:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:51:36.931996954 +0000 UTC m=+10.200280428" watchObservedRunningTime="2026-03-04 00:51:36.934041353 +0000 UTC m=+10.202324787"
Mar 4 00:51:37.429019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364917663.mount: Deactivated successfully.
Mar 4 00:51:37.954337 containerd[1831]: time="2026-03-04T00:51:37.953835092Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:37.957025 containerd[1831]: time="2026-03-04T00:51:37.956995410Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Mar 4 00:51:37.960409 containerd[1831]: time="2026-03-04T00:51:37.960353047Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:37.964928 containerd[1831]: time="2026-03-04T00:51:37.964875843Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:37.965755 containerd[1831]: time="2026-03-04T00:51:37.965726722Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.299816259s"
Mar 4 00:51:37.965905 containerd[1831]: time="2026-03-04T00:51:37.965839482Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Mar 4 00:51:37.973773 containerd[1831]: time="2026-03-04T00:51:37.973654716Z" level=info msg="CreateContainer within sandbox \"1160272b3b3333b4703750c823b22893ffe0e331f96150324447aa67b8326466\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 4 00:51:38.004045 containerd[1831]: time="2026-03-04T00:51:38.004000090Z" level=info msg="CreateContainer within sandbox \"1160272b3b3333b4703750c823b22893ffe0e331f96150324447aa67b8326466\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9914712aba145dafe1a689118d0435617a43eed1df41da528568ba3ea7439fb9\""
Mar 4 00:51:38.005766 containerd[1831]: time="2026-03-04T00:51:38.004961769Z" level=info msg="StartContainer for \"9914712aba145dafe1a689118d0435617a43eed1df41da528568ba3ea7439fb9\""
Mar 4 00:51:38.052020 containerd[1831]: time="2026-03-04T00:51:38.051981650Z" level=info msg="StartContainer for \"9914712aba145dafe1a689118d0435617a43eed1df41da528568ba3ea7439fb9\" returns successfully"
Mar 4 00:51:38.921316 kubelet[3342]: I0304 00:51:38.921247 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-th62h" podStartSLOduration=1.620376137 podStartE2EDuration="3.921231716s" podCreationTimestamp="2026-03-04 00:51:35 +0000 UTC" firstStartedPulling="2026-03-04 00:51:35.665620783 +0000 UTC m=+8.933904257" lastFinishedPulling="2026-03-04 00:51:37.966476362 +0000 UTC m=+11.234759836" observedRunningTime="2026-03-04 00:51:38.920746517 +0000 UTC m=+12.189029991" watchObservedRunningTime="2026-03-04 00:51:38.921231716 +0000 UTC m=+12.189515190"
Mar 4 00:51:44.033202 sudo[2352]: pam_unix(sudo:session): session closed for user root
Mar 4 00:51:44.112983 sshd[2348]: pam_unix(sshd:session): session closed for user core
Mar 4 00:51:44.121064 systemd-logind[1769]: Session 9 logged out. Waiting for processes to exit.
Mar 4 00:51:44.122116 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:58658.service: Deactivated successfully.
Mar 4 00:51:44.124710 systemd[1]: session-9.scope: Deactivated successfully.
Mar 4 00:51:44.125697 systemd-logind[1769]: Removed session 9.
Mar 4 00:51:48.967343 kubelet[3342]: I0304 00:51:48.967243 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/34836248-b592-4d0e-a127-2408ac7ff692-typha-certs\") pod \"calico-typha-64df5d8d74-dlxcn\" (UID: \"34836248-b592-4d0e-a127-2408ac7ff692\") " pod="calico-system/calico-typha-64df5d8d74-dlxcn"
Mar 4 00:51:48.967343 kubelet[3342]: I0304 00:51:48.967344 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34836248-b592-4d0e-a127-2408ac7ff692-tigera-ca-bundle\") pod \"calico-typha-64df5d8d74-dlxcn\" (UID: \"34836248-b592-4d0e-a127-2408ac7ff692\") " pod="calico-system/calico-typha-64df5d8d74-dlxcn"
Mar 4 00:51:48.968018 kubelet[3342]: I0304 00:51:48.967371 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcv4\" (UniqueName: \"kubernetes.io/projected/34836248-b592-4d0e-a127-2408ac7ff692-kube-api-access-6dcv4\") pod \"calico-typha-64df5d8d74-dlxcn\" (UID: \"34836248-b592-4d0e-a127-2408ac7ff692\") " pod="calico-system/calico-typha-64df5d8d74-dlxcn"
Mar 4 00:51:49.070330 kubelet[3342]: I0304 00:51:49.067855 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-bpffs\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070330 kubelet[3342]: I0304 00:51:49.067898 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-cni-bin-dir\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070330 kubelet[3342]: I0304 00:51:49.067918 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-cni-net-dir\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070330 kubelet[3342]: I0304 00:51:49.067936 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58qq\" (UniqueName: \"kubernetes.io/projected/3c98c409-d55c-48e5-8272-cc1a8390f46a-kube-api-access-t58qq\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070330 kubelet[3342]: I0304 00:51:49.067955 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-flexvol-driver-host\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070577 kubelet[3342]: I0304 00:51:49.067971 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-cni-log-dir\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070577 kubelet[3342]: I0304 00:51:49.067987 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-policysync\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070577 kubelet[3342]: I0304 00:51:49.068015 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c98c409-d55c-48e5-8272-cc1a8390f46a-node-certs\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070577 kubelet[3342]: I0304 00:51:49.068031 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-var-run-calico\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070577 kubelet[3342]: I0304 00:51:49.068047 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-lib-modules\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070684 kubelet[3342]: I0304 00:51:49.068063 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-nodeproc\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070684 kubelet[3342]: I0304 00:51:49.068091 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-sys-fs\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070684 kubelet[3342]: I0304 00:51:49.068110 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c98c409-d55c-48e5-8272-cc1a8390f46a-tigera-ca-bundle\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070684 kubelet[3342]: I0304 00:51:49.068125 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-xtables-lock\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.070684 kubelet[3342]: I0304 00:51:49.068152 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c98c409-d55c-48e5-8272-cc1a8390f46a-var-lib-calico\") pod \"calico-node-l4hvg\" (UID: \"3c98c409-d55c-48e5-8272-cc1a8390f46a\") " pod="calico-system/calico-node-l4hvg"
Mar 4 00:51:49.180770 kubelet[3342]: E0304 00:51:49.180287 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.182703 kubelet[3342]: W0304 00:51:49.180856 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.182703 kubelet[3342]: E0304 00:51:49.180893 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 4 00:51:49.187324 kubelet[3342]: E0304 00:51:49.185700 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864" Mar 4 00:51:49.203885 kubelet[3342]: E0304 00:51:49.203841 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.203885 kubelet[3342]: W0304 00:51:49.203872 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.204029 kubelet[3342]: E0304 00:51:49.203895 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.266030 kubelet[3342]: E0304 00:51:49.265932 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.266285 kubelet[3342]: W0304 00:51:49.266159 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.266285 kubelet[3342]: E0304 00:51:49.266188 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.266669 kubelet[3342]: E0304 00:51:49.266609 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.266909 kubelet[3342]: W0304 00:51:49.266622 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.266909 kubelet[3342]: E0304 00:51:49.266764 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.267665 kubelet[3342]: E0304 00:51:49.267652 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.267969 kubelet[3342]: W0304 00:51:49.267795 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.267969 kubelet[3342]: E0304 00:51:49.267811 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.269912 kubelet[3342]: E0304 00:51:49.269494 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.269912 kubelet[3342]: W0304 00:51:49.269507 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.269912 kubelet[3342]: E0304 00:51:49.269519 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.270792 kubelet[3342]: E0304 00:51:49.270611 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.270792 kubelet[3342]: W0304 00:51:49.270628 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.270792 kubelet[3342]: E0304 00:51:49.270639 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.271603 kubelet[3342]: E0304 00:51:49.271584 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.271603 kubelet[3342]: W0304 00:51:49.271602 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.271679 kubelet[3342]: E0304 00:51:49.271615 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.271785 kubelet[3342]: E0304 00:51:49.271769 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.271785 kubelet[3342]: W0304 00:51:49.271782 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.271841 kubelet[3342]: E0304 00:51:49.271792 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.272089 kubelet[3342]: E0304 00:51:49.272037 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.272089 kubelet[3342]: W0304 00:51:49.272050 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.272089 kubelet[3342]: E0304 00:51:49.272061 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.272504 kubelet[3342]: E0304 00:51:49.272284 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.272504 kubelet[3342]: W0304 00:51:49.272298 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.272504 kubelet[3342]: E0304 00:51:49.272315 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.272504 kubelet[3342]: E0304 00:51:49.272444 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.272504 kubelet[3342]: W0304 00:51:49.272451 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.272504 kubelet[3342]: E0304 00:51:49.272461 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.272592 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.273596 kubelet[3342]: W0304 00:51:49.272598 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.272606 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.272822 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.273596 kubelet[3342]: W0304 00:51:49.272844 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.272856 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.273041 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.273596 kubelet[3342]: W0304 00:51:49.273050 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.273058 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.273596 kubelet[3342]: E0304 00:51:49.273184 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.274663 kubelet[3342]: W0304 00:51:49.273191 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.274663 kubelet[3342]: E0304 00:51:49.273198 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.274663 kubelet[3342]: E0304 00:51:49.273312 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.274663 kubelet[3342]: W0304 00:51:49.273319 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.274663 kubelet[3342]: E0304 00:51:49.273335 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.274663 kubelet[3342]: E0304 00:51:49.273458 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.274663 kubelet[3342]: W0304 00:51:49.273467 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.274663 kubelet[3342]: E0304 00:51:49.273475 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.274663 kubelet[3342]: E0304 00:51:49.273638 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.274663 kubelet[3342]: W0304 00:51:49.273646 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.273653 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.273760 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.275851 kubelet[3342]: W0304 00:51:49.273766 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.273773 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.273883 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.275851 kubelet[3342]: W0304 00:51:49.273889 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.273896 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.274001 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.275851 kubelet[3342]: W0304 00:51:49.274007 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.275851 kubelet[3342]: E0304 00:51:49.274013 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.277061 kubelet[3342]: E0304 00:51:49.274254 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277061 kubelet[3342]: W0304 00:51:49.274263 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277061 kubelet[3342]: E0304 00:51:49.274271 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.277061 kubelet[3342]: I0304 00:51:49.274299 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f23c515e-4b9c-4719-aa7e-6cc7d093c864-varrun\") pod \"csi-node-driver-b4l6w\" (UID: \"f23c515e-4b9c-4719-aa7e-6cc7d093c864\") " pod="calico-system/csi-node-driver-b4l6w" Mar 4 00:51:49.277061 kubelet[3342]: E0304 00:51:49.274508 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277061 kubelet[3342]: W0304 00:51:49.274517 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277061 kubelet[3342]: E0304 00:51:49.274526 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.277061 kubelet[3342]: E0304 00:51:49.274647 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277061 kubelet[3342]: W0304 00:51:49.274654 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277249 containerd[1831]: time="2026-03-04T00:51:49.276995892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64df5d8d74-dlxcn,Uid:34836248-b592-4d0e-a127-2408ac7ff692,Namespace:calico-system,Attempt:0,}" Mar 4 00:51:49.277714 kubelet[3342]: E0304 00:51:49.274661 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.277714 kubelet[3342]: E0304 00:51:49.274781 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277714 kubelet[3342]: W0304 00:51:49.274788 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277714 kubelet[3342]: E0304 00:51:49.274794 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.277714 kubelet[3342]: I0304 00:51:49.274810 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f23c515e-4b9c-4719-aa7e-6cc7d093c864-kubelet-dir\") pod \"csi-node-driver-b4l6w\" (UID: \"f23c515e-4b9c-4719-aa7e-6cc7d093c864\") " pod="calico-system/csi-node-driver-b4l6w" Mar 4 00:51:49.277714 kubelet[3342]: E0304 00:51:49.274939 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277714 kubelet[3342]: W0304 00:51:49.274946 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277714 kubelet[3342]: E0304 00:51:49.274954 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.277881 kubelet[3342]: I0304 00:51:49.274970 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8264g\" (UniqueName: \"kubernetes.io/projected/f23c515e-4b9c-4719-aa7e-6cc7d093c864-kube-api-access-8264g\") pod \"csi-node-driver-b4l6w\" (UID: \"f23c515e-4b9c-4719-aa7e-6cc7d093c864\") " pod="calico-system/csi-node-driver-b4l6w" Mar 4 00:51:49.277881 kubelet[3342]: E0304 00:51:49.275094 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277881 kubelet[3342]: W0304 00:51:49.275101 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277881 kubelet[3342]: E0304 00:51:49.275108 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.277881 kubelet[3342]: I0304 00:51:49.275120 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f23c515e-4b9c-4719-aa7e-6cc7d093c864-registration-dir\") pod \"csi-node-driver-b4l6w\" (UID: \"f23c515e-4b9c-4719-aa7e-6cc7d093c864\") " pod="calico-system/csi-node-driver-b4l6w" Mar 4 00:51:49.277881 kubelet[3342]: E0304 00:51:49.275232 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.277881 kubelet[3342]: W0304 00:51:49.275239 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.277881 kubelet[3342]: E0304 00:51:49.275246 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.278068 kubelet[3342]: I0304 00:51:49.275260 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f23c515e-4b9c-4719-aa7e-6cc7d093c864-socket-dir\") pod \"csi-node-driver-b4l6w\" (UID: \"f23c515e-4b9c-4719-aa7e-6cc7d093c864\") " pod="calico-system/csi-node-driver-b4l6w" Mar 4 00:51:49.278068 kubelet[3342]: E0304 00:51:49.275399 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.278068 kubelet[3342]: W0304 00:51:49.275408 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.278068 kubelet[3342]: E0304 00:51:49.275416 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.278068 kubelet[3342]: E0304 00:51:49.275528 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.278068 kubelet[3342]: W0304 00:51:49.275535 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.278068 kubelet[3342]: E0304 00:51:49.275542 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.278068 kubelet[3342]: E0304 00:51:49.275664 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.278068 kubelet[3342]: W0304 00:51:49.275670 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.275677 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.275780 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.278247 kubelet[3342]: W0304 00:51:49.275786 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.275793 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.275909 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.278247 kubelet[3342]: W0304 00:51:49.275915 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.275922 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.276054 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:51:49.278247 kubelet[3342]: W0304 00:51:49.276064 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:51:49.278247 kubelet[3342]: E0304 00:51:49.276072 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 4 00:51:49.278470 kubelet[3342]: E0304 00:51:49.276205 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.278470 kubelet[3342]: W0304 00:51:49.276212 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.278470 kubelet[3342]: E0304 00:51:49.276221 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.278470 kubelet[3342]: E0304 00:51:49.276348 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.278470 kubelet[3342]: W0304 00:51:49.276356 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.278470 kubelet[3342]: E0304 00:51:49.276363 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.325591 containerd[1831]: time="2026-03-04T00:51:49.325491693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 00:51:49.325591 containerd[1831]: time="2026-03-04T00:51:49.325567333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 00:51:49.326380 containerd[1831]: time="2026-03-04T00:51:49.326174973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:49.326545 containerd[1831]: time="2026-03-04T00:51:49.326493813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:49.358381 containerd[1831]: time="2026-03-04T00:51:49.358274598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l4hvg,Uid:3c98c409-d55c-48e5-8272-cc1a8390f46a,Namespace:calico-system,Attempt:0,}"
Mar 4 00:51:49.374696 containerd[1831]: time="2026-03-04T00:51:49.374656950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64df5d8d74-dlxcn,Uid:34836248-b592-4d0e-a127-2408ac7ff692,Namespace:calico-system,Attempt:0,} returns sandbox id \"e49c298c76703253bba846432b8ab04b4e1aca0e6bc5ac42062fe404df2b82a8\""
Mar 4 00:51:49.376250 containerd[1831]: time="2026-03-04T00:51:49.376210149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 4 00:51:49.377764 kubelet[3342]: E0304 00:51:49.377649 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.377764 kubelet[3342]: W0304 00:51:49.377673 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.377764 kubelet[3342]: E0304 00:51:49.377737 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.378265 kubelet[3342]: E0304 00:51:49.378201 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.378265 kubelet[3342]: W0304 00:51:49.378213 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.378265 kubelet[3342]: E0304 00:51:49.378224 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.378756 kubelet[3342]: E0304 00:51:49.378659 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.378756 kubelet[3342]: W0304 00:51:49.378674 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.378756 kubelet[3342]: E0304 00:51:49.378685 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.378986 kubelet[3342]: E0304 00:51:49.378886 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.378986 kubelet[3342]: W0304 00:51:49.378897 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.378986 kubelet[3342]: E0304 00:51:49.378905 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.379355 kubelet[3342]: E0304 00:51:49.379048 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.379355 kubelet[3342]: W0304 00:51:49.379198 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.379355 kubelet[3342]: E0304 00:51:49.379209 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.379526 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381071 kubelet[3342]: W0304 00:51:49.379535 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.379559 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.379818 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381071 kubelet[3342]: W0304 00:51:49.379827 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.379836 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.379995 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381071 kubelet[3342]: W0304 00:51:49.380014 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.380025 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381071 kubelet[3342]: E0304 00:51:49.380196 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381281 kubelet[3342]: W0304 00:51:49.380204 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.381281 kubelet[3342]: E0304 00:51:49.380212 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381281 kubelet[3342]: E0304 00:51:49.380450 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381281 kubelet[3342]: W0304 00:51:49.380460 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.381281 kubelet[3342]: E0304 00:51:49.380473 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381281 kubelet[3342]: E0304 00:51:49.380619 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381281 kubelet[3342]: W0304 00:51:49.380627 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.381281 kubelet[3342]: E0304 00:51:49.380636 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.381281 kubelet[3342]: E0304 00:51:49.380793 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.381281 kubelet[3342]: W0304 00:51:49.380801 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.382089 kubelet[3342]: E0304 00:51:49.380813 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.382089 kubelet[3342]: E0304 00:51:49.381404 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.382089 kubelet[3342]: W0304 00:51:49.381415 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.382089 kubelet[3342]: E0304 00:51:49.381425 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.382089 kubelet[3342]: E0304 00:51:49.381611 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.382089 kubelet[3342]: W0304 00:51:49.381620 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.382089 kubelet[3342]: E0304 00:51:49.381628 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.382530 kubelet[3342]: E0304 00:51:49.382138 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.382530 kubelet[3342]: W0304 00:51:49.382147 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.382530 kubelet[3342]: E0304 00:51:49.382159 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.382895 kubelet[3342]: E0304 00:51:49.382646 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.382895 kubelet[3342]: W0304 00:51:49.382656 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.382895 kubelet[3342]: E0304 00:51:49.382666 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.382895 kubelet[3342]: E0304 00:51:49.382816 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.382895 kubelet[3342]: W0304 00:51:49.382824 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.382895 kubelet[3342]: E0304 00:51:49.382832 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.383297 kubelet[3342]: E0304 00:51:49.383263 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.383297 kubelet[3342]: W0304 00:51:49.383276 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.383297 kubelet[3342]: E0304 00:51:49.383286 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.383992 kubelet[3342]: E0304 00:51:49.383976 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.384431 kubelet[3342]: W0304 00:51:49.384286 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.384431 kubelet[3342]: E0304 00:51:49.384351 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.384570 kubelet[3342]: E0304 00:51:49.384559 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.384624 kubelet[3342]: W0304 00:51:49.384615 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.384676 kubelet[3342]: E0304 00:51:49.384667 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.384955 kubelet[3342]: E0304 00:51:49.384923 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.384955 kubelet[3342]: W0304 00:51:49.384933 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.384955 kubelet[3342]: E0304 00:51:49.384943 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.385390 kubelet[3342]: E0304 00:51:49.385238 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.385390 kubelet[3342]: W0304 00:51:49.385248 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.385390 kubelet[3342]: E0304 00:51:49.385258 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.385749 kubelet[3342]: E0304 00:51:49.385662 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.385749 kubelet[3342]: W0304 00:51:49.385674 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.385749 kubelet[3342]: E0304 00:51:49.385684 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.386063 kubelet[3342]: E0304 00:51:49.385979 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.386063 kubelet[3342]: W0304 00:51:49.385989 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.386063 kubelet[3342]: E0304 00:51:49.385998 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.386567 kubelet[3342]: E0304 00:51:49.386315 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.386567 kubelet[3342]: W0304 00:51:49.386326 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.386567 kubelet[3342]: E0304 00:51:49.386336 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.398585 kubelet[3342]: E0304 00:51:49.398560 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:51:49.398916 kubelet[3342]: W0304 00:51:49.398632 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:51:49.398916 kubelet[3342]: E0304 00:51:49.398653 3342 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:51:49.409923 containerd[1831]: time="2026-03-04T00:51:49.409723693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 00:51:49.409923 containerd[1831]: time="2026-03-04T00:51:49.409786013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 00:51:49.409923 containerd[1831]: time="2026-03-04T00:51:49.409798893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:49.410192 containerd[1831]: time="2026-03-04T00:51:49.410108893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 00:51:49.446425 containerd[1831]: time="2026-03-04T00:51:49.446371716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l4hvg,Uid:3c98c409-d55c-48e5-8272-cc1a8390f46a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\""
Mar 4 00:51:50.684815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43646965.mount: Deactivated successfully.
Mar 4 00:51:50.849679 kubelet[3342]: E0304 00:51:50.849640 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:51:51.717347 containerd[1831]: time="2026-03-04T00:51:51.716641722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:51.719557 containerd[1831]: time="2026-03-04T00:51:51.719401241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 4 00:51:51.722700 containerd[1831]: time="2026-03-04T00:51:51.722631119Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:51.727505 containerd[1831]: time="2026-03-04T00:51:51.727463517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:51.728404 containerd[1831]: time="2026-03-04T00:51:51.728370997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.352126608s"
Mar 4 00:51:51.728501 containerd[1831]: time="2026-03-04T00:51:51.728486277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 4 00:51:51.730504 containerd[1831]: time="2026-03-04T00:51:51.730040796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 4 00:51:51.793911 containerd[1831]: time="2026-03-04T00:51:51.793872766Z" level=info msg="CreateContainer within sandbox \"e49c298c76703253bba846432b8ab04b4e1aca0e6bc5ac42062fe404df2b82a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 4 00:51:51.882170 containerd[1831]: time="2026-03-04T00:51:51.882022764Z" level=info msg="CreateContainer within sandbox \"e49c298c76703253bba846432b8ab04b4e1aca0e6bc5ac42062fe404df2b82a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"30e05897d71afe944ef555e33483547e5ecaa69a158ed66347f75a1d3b7c7df8\""
Mar 4 00:51:51.883337 containerd[1831]: time="2026-03-04T00:51:51.882779924Z" level=info msg="StartContainer for \"30e05897d71afe944ef555e33483547e5ecaa69a158ed66347f75a1d3b7c7df8\""
Mar 4 00:51:51.992459 containerd[1831]: time="2026-03-04T00:51:51.990843873Z" level=info msg="StartContainer for \"30e05897d71afe944ef555e33483547e5ecaa69a158ed66347f75a1d3b7c7df8\" returns successfully"
Mar 4 00:51:52.849371 kubelet[3342]: E0304 00:51:52.848171 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:51:52.970871 containerd[1831]: time="2026-03-04T00:51:52.970110089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:52.972801 containerd[1831]: time="2026-03-04T00:51:52.972769808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Mar 4 00:51:52.975871 containerd[1831]: time="2026-03-04T00:51:52.975846367Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:52.981552 containerd[1831]: time="2026-03-04T00:51:52.981326204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:51:52.982017 containerd[1831]: time="2026-03-04T00:51:52.981985004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.251900528s"
Mar 4 00:51:52.982058 containerd[1831]: time="2026-03-04T00:51:52.982017804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Mar 4 00:51:52.990353 containerd[1831]: time="2026-03-04T00:51:52.990318520Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 4 00:51:53.014497 kubelet[3342]: I0304 00:51:53.014035 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64df5d8d74-dlxcn" podStartSLOduration=2.660209942 podStartE2EDuration="5.014020869s" podCreationTimestamp="2026-03-04 00:51:48 +0000 UTC" firstStartedPulling="2026-03-04 00:51:49.375670349 +0000 UTC m=+22.643953783" lastFinishedPulling="2026-03-04 00:51:51.729481236 +0000 UTC m=+24.997764710" observedRunningTime="2026-03-04 00:51:53.013901549 +0000 UTC m=+26.282185023" watchObservedRunningTime="2026-03-04 00:51:53.014020869 +0000 UTC m=+26.282304343"
Mar 4 00:51:53.026931 containerd[1831]: time="2026-03-04T00:51:53.026813103Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"798d833b870ce52d39cfaa16492f136189f8952504134cf125c1bc65f6916187\""
Mar 4 00:51:53.029979 containerd[1831]: time="2026-03-04T00:51:53.028924982Z" level=info msg="StartContainer for \"798d833b870ce52d39cfaa16492f136189f8952504134cf125c1bc65f6916187\""
Mar 4 00:51:53.088785 containerd[1831]: time="2026-03-04T00:51:53.088623713Z" level=info msg="StartContainer for \"798d833b870ce52d39cfaa16492f136189f8952504134cf125c1bc65f6916187\" returns successfully"
Mar 4 00:51:53.122239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-798d833b870ce52d39cfaa16492f136189f8952504134cf125c1bc65f6916187-rootfs.mount: Deactivated successfully.
Mar 4 00:51:54.000993 kubelet[3342]: I0304 00:51:54.000222 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 4 00:51:54.242009 containerd[1831]: time="2026-03-04T00:51:54.241934888Z" level=info msg="shim disconnected" id=798d833b870ce52d39cfaa16492f136189f8952504134cf125c1bc65f6916187 namespace=k8s.io
Mar 4 00:51:54.242688 containerd[1831]: time="2026-03-04T00:51:54.242337008Z" level=warning msg="cleaning up after shim disconnected" id=798d833b870ce52d39cfaa16492f136189f8952504134cf125c1bc65f6916187 namespace=k8s.io
Mar 4 00:51:54.242688 containerd[1831]: time="2026-03-04T00:51:54.242355008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 00:51:54.252777 containerd[1831]: time="2026-03-04T00:51:54.252658603Z" level=warning msg="cleanup warnings time=\"2026-03-04T00:51:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 4 00:51:54.849461 kubelet[3342]: E0304 00:51:54.848707 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:51:55.005147 containerd[1831]: time="2026-03-04T00:51:55.004665927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 4 00:51:56.850217 kubelet[3342]: E0304 00:51:56.849142 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:51:58.848917 kubelet[3342]: E0304 00:51:58.848874 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:51:59.153242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293018651.mount: Deactivated successfully.
Mar 4 00:52:00.132207 containerd[1831]: time="2026-03-04T00:52:00.132154458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:00.136341 containerd[1831]: time="2026-03-04T00:52:00.136286135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Mar 4 00:52:00.140072 containerd[1831]: time="2026-03-04T00:52:00.139566052Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:00.144243 containerd[1831]: time="2026-03-04T00:52:00.144183967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:00.145024 containerd[1831]: time="2026-03-04T00:52:00.144792327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 5.1398722s"
Mar 4 00:52:00.145024 containerd[1831]: time="2026-03-04T00:52:00.144824727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Mar 4 00:52:00.153570 containerd[1831]: time="2026-03-04T00:52:00.153528839Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 4 00:52:00.192569 containerd[1831]: time="2026-03-04T00:52:00.192515404Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"096ce44ba2248c61c5d535f0726c2a0c278c02ba2d4f0a1184caab7738e929ad\""
Mar 4 00:52:00.193644 containerd[1831]: time="2026-03-04T00:52:00.193412923Z" level=info msg="StartContainer for \"096ce44ba2248c61c5d535f0726c2a0c278c02ba2d4f0a1184caab7738e929ad\""
Mar 4 00:52:00.251856 containerd[1831]: time="2026-03-04T00:52:00.251800351Z" level=info msg="StartContainer for \"096ce44ba2248c61c5d535f0726c2a0c278c02ba2d4f0a1184caab7738e929ad\" returns successfully"
Mar 4 00:52:00.302117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-096ce44ba2248c61c5d535f0726c2a0c278c02ba2d4f0a1184caab7738e929ad-rootfs.mount: Deactivated successfully.
Mar 4 00:52:00.848634 kubelet[3342]: E0304 00:52:00.848263 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:52:01.053346 containerd[1831]: time="2026-03-04T00:52:01.051490157Z" level=info msg="shim disconnected" id=096ce44ba2248c61c5d535f0726c2a0c278c02ba2d4f0a1184caab7738e929ad namespace=k8s.io
Mar 4 00:52:01.053346 containerd[1831]: time="2026-03-04T00:52:01.051553117Z" level=warning msg="cleaning up after shim disconnected" id=096ce44ba2248c61c5d535f0726c2a0c278c02ba2d4f0a1184caab7738e929ad namespace=k8s.io
Mar 4 00:52:01.053346 containerd[1831]: time="2026-03-04T00:52:01.051562756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 00:52:01.342448 kubelet[3342]: I0304 00:52:01.342417 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 4 00:52:02.020707 containerd[1831]: time="2026-03-04T00:52:02.020467251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 4 00:52:02.849549 kubelet[3342]: E0304 00:52:02.848512 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:52:04.510711 containerd[1831]: time="2026-03-04T00:52:04.510660905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:04.513437 containerd[1831]: time="2026-03-04T00:52:04.513155023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Mar 4 00:52:04.520391 containerd[1831]: time="2026-03-04T00:52:04.519686777Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:04.527787 containerd[1831]: time="2026-03-04T00:52:04.527739850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:04.528478 containerd[1831]: time="2026-03-04T00:52:04.528442889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.507936438s"
Mar 4 00:52:04.528539 containerd[1831]: time="2026-03-04T00:52:04.528482809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Mar 4 00:52:04.537222 containerd[1831]: time="2026-03-04T00:52:04.537180161Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 4 00:52:04.570154 containerd[1831]: time="2026-03-04T00:52:04.570109052Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2f0d63045f0e4b5243d0160235a78d1bf0455496a8e890373783f7b67ef5821b\""
Mar 4 00:52:04.571069 containerd[1831]: time="2026-03-04T00:52:04.571047851Z" level=info msg="StartContainer for \"2f0d63045f0e4b5243d0160235a78d1bf0455496a8e890373783f7b67ef5821b\""
Mar 4 00:52:04.631821 containerd[1831]: time="2026-03-04T00:52:04.631779397Z" level=info msg="StartContainer for \"2f0d63045f0e4b5243d0160235a78d1bf0455496a8e890373783f7b67ef5821b\" returns successfully"
Mar 4 00:52:04.850599 kubelet[3342]: E0304 00:52:04.849556 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:52:06.849745 kubelet[3342]: E0304 00:52:06.848792 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4l6w" podUID="f23c515e-4b9c-4719-aa7e-6cc7d093c864"
Mar 4 00:52:06.903492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f0d63045f0e4b5243d0160235a78d1bf0455496a8e890373783f7b67ef5821b-rootfs.mount: Deactivated successfully.
Mar 4 00:52:06.912964 containerd[1831]: time="2026-03-04T00:52:06.912891196Z" level=info msg="shim disconnected" id=2f0d63045f0e4b5243d0160235a78d1bf0455496a8e890373783f7b67ef5821b namespace=k8s.io Mar 4 00:52:06.912964 containerd[1831]: time="2026-03-04T00:52:06.912960836Z" level=warning msg="cleaning up after shim disconnected" id=2f0d63045f0e4b5243d0160235a78d1bf0455496a8e890373783f7b67ef5821b namespace=k8s.io Mar 4 00:52:06.912964 containerd[1831]: time="2026-03-04T00:52:06.912969196Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:52:06.929139 kubelet[3342]: I0304 00:52:06.929003 3342 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 4 00:52:06.999770 kubelet[3342]: I0304 00:52:06.999569 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17adcf2c-ca3d-47de-a484-e9ed33024310-config-volume\") pod \"coredns-674b8bbfcf-vh8pq\" (UID: \"17adcf2c-ca3d-47de-a484-e9ed33024310\") " pod="kube-system/coredns-674b8bbfcf-vh8pq" Mar 4 00:52:06.999770 kubelet[3342]: I0304 00:52:06.999613 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf2mn\" (UniqueName: \"kubernetes.io/projected/17adcf2c-ca3d-47de-a484-e9ed33024310-kube-api-access-nf2mn\") pod \"coredns-674b8bbfcf-vh8pq\" (UID: \"17adcf2c-ca3d-47de-a484-e9ed33024310\") " pod="kube-system/coredns-674b8bbfcf-vh8pq" Mar 4 00:52:07.096083 containerd[1831]: time="2026-03-04T00:52:07.095368438Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 00:52:07.100068 kubelet[3342]: I0304 00:52:07.099958 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxdq\" (UniqueName: 
\"kubernetes.io/projected/5eb4339c-7657-4027-8dc5-7b1105a7a5ec-kube-api-access-vjxdq\") pod \"calico-apiserver-7c45f96d9c-wqfg6\" (UID: \"5eb4339c-7657-4027-8dc5-7b1105a7a5ec\") " pod="calico-system/calico-apiserver-7c45f96d9c-wqfg6" Mar 4 00:52:07.100395 kubelet[3342]: I0304 00:52:07.100200 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee329dba-1352-4c5a-bb12-ba0f81ec0c1c-config\") pod \"goldmane-5b85766d88-26kr2\" (UID: \"ee329dba-1352-4c5a-bb12-ba0f81ec0c1c\") " pod="calico-system/goldmane-5b85766d88-26kr2" Mar 4 00:52:07.100395 kubelet[3342]: I0304 00:52:07.100227 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ee329dba-1352-4c5a-bb12-ba0f81ec0c1c-goldmane-key-pair\") pod \"goldmane-5b85766d88-26kr2\" (UID: \"ee329dba-1352-4c5a-bb12-ba0f81ec0c1c\") " pod="calico-system/goldmane-5b85766d88-26kr2" Mar 4 00:52:07.100395 kubelet[3342]: I0304 00:52:07.100243 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/faab35c8-45b0-4065-83ed-922bb968ad92-calico-apiserver-certs\") pod \"calico-apiserver-7c45f96d9c-8mm4l\" (UID: \"faab35c8-45b0-4065-83ed-922bb968ad92\") " pod="calico-system/calico-apiserver-7c45f96d9c-8mm4l" Mar 4 00:52:07.100395 kubelet[3342]: I0304 00:52:07.100271 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6gq\" (UniqueName: \"kubernetes.io/projected/faab35c8-45b0-4065-83ed-922bb968ad92-kube-api-access-9w6gq\") pod \"calico-apiserver-7c45f96d9c-8mm4l\" (UID: \"faab35c8-45b0-4065-83ed-922bb968ad92\") " pod="calico-system/calico-apiserver-7c45f96d9c-8mm4l" Mar 4 00:52:07.100395 kubelet[3342]: I0304 00:52:07.100292 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv66c\" (UniqueName: \"kubernetes.io/projected/ee329dba-1352-4c5a-bb12-ba0f81ec0c1c-kube-api-access-dv66c\") pod \"goldmane-5b85766d88-26kr2\" (UID: \"ee329dba-1352-4c5a-bb12-ba0f81ec0c1c\") " pod="calico-system/goldmane-5b85766d88-26kr2" Mar 4 00:52:07.100541 kubelet[3342]: I0304 00:52:07.100326 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-nginx-config\") pod \"whisker-5b4866d8d5-skhsh\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") " pod="calico-system/whisker-5b4866d8d5-skhsh" Mar 4 00:52:07.100541 kubelet[3342]: I0304 00:52:07.100343 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/875baab3-ef36-432f-9e30-a2535266fb28-whisker-backend-key-pair\") pod \"whisker-5b4866d8d5-skhsh\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") " pod="calico-system/whisker-5b4866d8d5-skhsh" Mar 4 00:52:07.100541 kubelet[3342]: I0304 00:52:07.100358 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e098973d-49bb-4990-85c1-ba2e47b94368-config-volume\") pod \"coredns-674b8bbfcf-x76fc\" (UID: \"e098973d-49bb-4990-85c1-ba2e47b94368\") " pod="kube-system/coredns-674b8bbfcf-x76fc" Mar 4 00:52:07.100541 kubelet[3342]: I0304 00:52:07.100376 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-whisker-ca-bundle\") pod \"whisker-5b4866d8d5-skhsh\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") " pod="calico-system/whisker-5b4866d8d5-skhsh" Mar 4 00:52:07.100948 kubelet[3342]: I0304 00:52:07.100688 
3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lsx9\" (UniqueName: \"kubernetes.io/projected/e098973d-49bb-4990-85c1-ba2e47b94368-kube-api-access-9lsx9\") pod \"coredns-674b8bbfcf-x76fc\" (UID: \"e098973d-49bb-4990-85c1-ba2e47b94368\") " pod="kube-system/coredns-674b8bbfcf-x76fc" Mar 4 00:52:07.100948 kubelet[3342]: I0304 00:52:07.100725 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5eb4339c-7657-4027-8dc5-7b1105a7a5ec-calico-apiserver-certs\") pod \"calico-apiserver-7c45f96d9c-wqfg6\" (UID: \"5eb4339c-7657-4027-8dc5-7b1105a7a5ec\") " pod="calico-system/calico-apiserver-7c45f96d9c-wqfg6" Mar 4 00:52:07.100948 kubelet[3342]: I0304 00:52:07.100743 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee329dba-1352-4c5a-bb12-ba0f81ec0c1c-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-26kr2\" (UID: \"ee329dba-1352-4c5a-bb12-ba0f81ec0c1c\") " pod="calico-system/goldmane-5b85766d88-26kr2" Mar 4 00:52:07.100948 kubelet[3342]: I0304 00:52:07.100762 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2445\" (UniqueName: \"kubernetes.io/projected/e25b2359-8549-4cfb-921c-3e500b348461-kube-api-access-q2445\") pod \"calico-kube-controllers-79dbdd77d5-kb2wx\" (UID: \"e25b2359-8549-4cfb-921c-3e500b348461\") " pod="calico-system/calico-kube-controllers-79dbdd77d5-kb2wx" Mar 4 00:52:07.100948 kubelet[3342]: I0304 00:52:07.100782 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686r4\" (UniqueName: \"kubernetes.io/projected/875baab3-ef36-432f-9e30-a2535266fb28-kube-api-access-686r4\") pod \"whisker-5b4866d8d5-skhsh\" (UID: 
\"875baab3-ef36-432f-9e30-a2535266fb28\") " pod="calico-system/whisker-5b4866d8d5-skhsh" Mar 4 00:52:07.101090 kubelet[3342]: I0304 00:52:07.100823 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25b2359-8549-4cfb-921c-3e500b348461-tigera-ca-bundle\") pod \"calico-kube-controllers-79dbdd77d5-kb2wx\" (UID: \"e25b2359-8549-4cfb-921c-3e500b348461\") " pod="calico-system/calico-kube-controllers-79dbdd77d5-kb2wx" Mar 4 00:52:07.145968 containerd[1831]: time="2026-03-04T00:52:07.145814354Z" level=info msg="CreateContainer within sandbox \"ed509718901211238bdfd93c7294aed18b2b09e611c7e32fca1a9d1602a8dc69\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1812679663475c736e0262a8dd9c6cc77655eefe65b38c30965ccc762d44eb07\"" Mar 4 00:52:07.147218 containerd[1831]: time="2026-03-04T00:52:07.146942393Z" level=info msg="StartContainer for \"1812679663475c736e0262a8dd9c6cc77655eefe65b38c30965ccc762d44eb07\"" Mar 4 00:52:07.224345 containerd[1831]: time="2026-03-04T00:52:07.221879088Z" level=info msg="StartContainer for \"1812679663475c736e0262a8dd9c6cc77655eefe65b38c30965ccc762d44eb07\" returns successfully" Mar 4 00:52:07.300599 containerd[1831]: time="2026-03-04T00:52:07.300553539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vh8pq,Uid:17adcf2c-ca3d-47de-a484-e9ed33024310,Namespace:kube-system,Attempt:0,}" Mar 4 00:52:07.305922 containerd[1831]: time="2026-03-04T00:52:07.305703175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c45f96d9c-8mm4l,Uid:faab35c8-45b0-4065-83ed-922bb968ad92,Namespace:calico-system,Attempt:0,}" Mar 4 00:52:07.308598 containerd[1831]: time="2026-03-04T00:52:07.308554412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dbdd77d5-kb2wx,Uid:e25b2359-8549-4cfb-921c-3e500b348461,Namespace:calico-system,Attempt:0,}" Mar 4 
00:52:07.322062 containerd[1831]: time="2026-03-04T00:52:07.321792201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-26kr2,Uid:ee329dba-1352-4c5a-bb12-ba0f81ec0c1c,Namespace:calico-system,Attempt:0,}" Mar 4 00:52:07.324624 containerd[1831]: time="2026-03-04T00:52:07.323830359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c45f96d9c-wqfg6,Uid:5eb4339c-7657-4027-8dc5-7b1105a7a5ec,Namespace:calico-system,Attempt:0,}" Mar 4 00:52:07.326137 containerd[1831]: time="2026-03-04T00:52:07.326109397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x76fc,Uid:e098973d-49bb-4990-85c1-ba2e47b94368,Namespace:kube-system,Attempt:0,}" Mar 4 00:52:07.333324 containerd[1831]: time="2026-03-04T00:52:07.333205791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b4866d8d5-skhsh,Uid:875baab3-ef36-432f-9e30-a2535266fb28,Namespace:calico-system,Attempt:0,}" Mar 4 00:52:07.974487 systemd-networkd[1371]: cali35e9d0db362: Link UP Mar 4 00:52:07.979865 systemd-networkd[1371]: cali35e9d0db362: Gained carrier Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.582 [ERROR][4225] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.632 [INFO][4225] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0 calico-kube-controllers-79dbdd77d5- calico-system e25b2359-8549-4cfb-921c-3e500b348461 869 0 2026-03-04 00:51:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79dbdd77d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 calico-kube-controllers-79dbdd77d5-kb2wx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali35e9d0db362 [] [] }} ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.632 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.782 [INFO][4304] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" HandleID="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.800 [INFO][4304] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" HandleID="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cb700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"calico-kube-controllers-79dbdd77d5-kb2wx", "timestamp":"2026-03-04 00:52:07.782014921 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400035a9a0)} Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.800 [INFO][4304] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.800 [INFO][4304] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.801 [INFO][4304] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.803 [INFO][4304] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.810 [INFO][4304] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.817 [INFO][4304] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.819 [INFO][4304] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.821 [INFO][4304] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.821 [INFO][4304] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 
containerd[1831]: 2026-03-04 00:52:07.829 [INFO][4304] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132 Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.836 [INFO][4304] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.848 [INFO][4304] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.129/26] block=192.168.47.128/26 handle="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.848 [INFO][4304] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.129/26] handle="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.848 [INFO][4304] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 00:52:08.027021 containerd[1831]: 2026-03-04 00:52:07.848 [INFO][4304] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.129/26] IPv6=[] ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" HandleID="k8s-pod-network.1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.030091 containerd[1831]: 2026-03-04 00:52:07.858 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0", GenerateName:"calico-kube-controllers-79dbdd77d5-", Namespace:"calico-system", SelfLink:"", UID:"e25b2359-8549-4cfb-921c-3e500b348461", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dbdd77d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"calico-kube-controllers-79dbdd77d5-kb2wx", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35e9d0db362", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.030091 containerd[1831]: 2026-03-04 00:52:07.858 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.129/32] ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.030091 containerd[1831]: 2026-03-04 00:52:07.858 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35e9d0db362 ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.030091 containerd[1831]: 2026-03-04 00:52:07.986 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.030091 containerd[1831]: 2026-03-04 00:52:07.986 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" Pod="calico-kube-controllers-79dbdd77d5-kb2wx" 
WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0", GenerateName:"calico-kube-controllers-79dbdd77d5-", Namespace:"calico-system", SelfLink:"", UID:"e25b2359-8549-4cfb-921c-3e500b348461", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dbdd77d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132", Pod:"calico-kube-controllers-79dbdd77d5-kb2wx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali35e9d0db362", MAC:"62:42:06:cb:61:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.030091 containerd[1831]: 2026-03-04 00:52:08.018 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132" Namespace="calico-system" 
Pod="calico-kube-controllers-79dbdd77d5-kb2wx" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--kube--controllers--79dbdd77d5--kb2wx-eth0" Mar 4 00:52:08.048445 systemd-networkd[1371]: cali696d454b481: Link UP Mar 4 00:52:08.048632 systemd-networkd[1371]: cali696d454b481: Gained carrier Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.541 [ERROR][4234] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.592 [INFO][4234] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0 calico-apiserver-7c45f96d9c- calico-system 5eb4339c-7657-4027-8dc5-7b1105a7a5ec 877 0 2026-03-04 00:51:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c45f96d9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 calico-apiserver-7c45f96d9c-wqfg6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali696d454b481 [] [] }} ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.592 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.086197 
containerd[1831]: 2026-03-04 00:52:07.781 [INFO][4296] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" HandleID="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.812 [INFO][4296] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" HandleID="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000313910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"calico-apiserver-7c45f96d9c-wqfg6", "timestamp":"2026-03-04 00:52:07.781895401 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003f71e0)} Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.812 [INFO][4296] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.850 [INFO][4296] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.850 [INFO][4296] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.903 [INFO][4296] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.911 [INFO][4296] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.949 [INFO][4296] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.958 [INFO][4296] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.976 [INFO][4296] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.976 [INFO][4296] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:07.989 [INFO][4296] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:08.004 [INFO][4296] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:08.026 [INFO][4296] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.47.130/26] block=192.168.47.128/26 handle="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:08.026 [INFO][4296] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.130/26] handle="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:08.028 [INFO][4296] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:08.086197 containerd[1831]: 2026-03-04 00:52:08.028 [INFO][4296] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.130/26] IPv6=[] ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" HandleID="k8s-pod-network.b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.087547 containerd[1831]: 2026-03-04 00:52:08.042 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0", GenerateName:"calico-apiserver-7c45f96d9c-", Namespace:"calico-system", SelfLink:"", UID:"5eb4339c-7657-4027-8dc5-7b1105a7a5ec", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7c45f96d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"calico-apiserver-7c45f96d9c-wqfg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali696d454b481", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.087547 containerd[1831]: 2026-03-04 00:52:08.042 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.130/32] ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.087547 containerd[1831]: 2026-03-04 00:52:08.042 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali696d454b481 ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.087547 containerd[1831]: 2026-03-04 00:52:08.049 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" 
WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.087547 containerd[1831]: 2026-03-04 00:52:08.049 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0", GenerateName:"calico-apiserver-7c45f96d9c-", Namespace:"calico-system", SelfLink:"", UID:"5eb4339c-7657-4027-8dc5-7b1105a7a5ec", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c45f96d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a", Pod:"calico-apiserver-7c45f96d9c-wqfg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali696d454b481", MAC:"1a:cf:3f:c9:99:cd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.087547 containerd[1831]: 2026-03-04 00:52:08.075 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-wqfg6" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--wqfg6-eth0" Mar 4 00:52:08.109145 containerd[1831]: time="2026-03-04T00:52:08.108737076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.109145 containerd[1831]: time="2026-03-04T00:52:08.108794916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.109145 containerd[1831]: time="2026-03-04T00:52:08.108859876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.109434 containerd[1831]: time="2026-03-04T00:52:08.109284516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.133478 containerd[1831]: time="2026-03-04T00:52:08.133139495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.133478 containerd[1831]: time="2026-03-04T00:52:08.133195215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.133478 containerd[1831]: time="2026-03-04T00:52:08.133210695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.133478 containerd[1831]: time="2026-03-04T00:52:08.133302335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.140701 systemd-networkd[1371]: caliad06a13167a: Link UP Mar 4 00:52:08.145907 systemd-networkd[1371]: caliad06a13167a: Gained carrier Mar 4 00:52:08.186447 kubelet[3342]: I0304 00:52:08.184657 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l4hvg" podStartSLOduration=4.102969997 podStartE2EDuration="19.18463509s" podCreationTimestamp="2026-03-04 00:51:49 +0000 UTC" firstStartedPulling="2026-03-04 00:51:49.447906595 +0000 UTC m=+22.716190069" lastFinishedPulling="2026-03-04 00:52:04.529571728 +0000 UTC m=+37.797855162" observedRunningTime="2026-03-04 00:52:08.152225399 +0000 UTC m=+41.420508913" watchObservedRunningTime="2026-03-04 00:52:08.18463509 +0000 UTC m=+41.452918564" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:07.549 [ERROR][4221] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:07.598 [INFO][4221] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0 coredns-674b8bbfcf- kube-system 17adcf2c-ca3d-47de-a484-e9ed33024310 865 0 2026-03-04 00:51:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 coredns-674b8bbfcf-vh8pq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad06a13167a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 
0 }] [] }} ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:07.598 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:07.815 [INFO][4295] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" HandleID="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Workload="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:07.836 [INFO][4295] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" HandleID="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Workload="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ca0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"coredns-674b8bbfcf-vh8pq", "timestamp":"2026-03-04 00:52:07.815020412 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000d8840)} Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:07.836 [INFO][4295] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.026 [INFO][4295] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.027 [INFO][4295] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.040 [INFO][4295] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.056 [INFO][4295] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.077 [INFO][4295] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.080 [INFO][4295] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.087 [INFO][4295] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.088 [INFO][4295] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.089 [INFO][4295] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05 Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.096 [INFO][4295] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" 
host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.122 [INFO][4295] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.131/26] block=192.168.47.128/26 handle="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.122 [INFO][4295] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.131/26] handle="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.122 [INFO][4295] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:08.210749 containerd[1831]: 2026-03-04 00:52:08.122 [INFO][4295] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.131/26] IPv6=[] ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" HandleID="k8s-pod-network.c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Workload="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.211514 containerd[1831]: 2026-03-04 00:52:08.133 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"17adcf2c-ca3d-47de-a484-e9ed33024310", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"coredns-674b8bbfcf-vh8pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad06a13167a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.211514 containerd[1831]: 2026-03-04 00:52:08.133 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.131/32] ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.211514 containerd[1831]: 2026-03-04 00:52:08.133 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad06a13167a ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" 
WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.211514 containerd[1831]: 2026-03-04 00:52:08.149 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.211514 containerd[1831]: 2026-03-04 00:52:08.154 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"17adcf2c-ca3d-47de-a484-e9ed33024310", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05", Pod:"coredns-674b8bbfcf-vh8pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad06a13167a", MAC:"ee:c5:c5:5a:3a:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.211514 containerd[1831]: 2026-03-04 00:52:08.173 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05" Namespace="kube-system" Pod="coredns-674b8bbfcf-vh8pq" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--vh8pq-eth0" Mar 4 00:52:08.253748 containerd[1831]: time="2026-03-04T00:52:08.253400871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.253748 containerd[1831]: time="2026-03-04T00:52:08.253450711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.253748 containerd[1831]: time="2026-03-04T00:52:08.253461071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.253748 containerd[1831]: time="2026-03-04T00:52:08.253611550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.277010 systemd-networkd[1371]: cali1b2f52e82f9: Link UP Mar 4 00:52:08.277170 systemd-networkd[1371]: cali1b2f52e82f9: Gained carrier Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:07.695 [ERROR][4267] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:07.759 [INFO][4267] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0 goldmane-5b85766d88- calico-system ee329dba-1352-4c5a-bb12-ba0f81ec0c1c 872 0 2026-03-04 00:51:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 goldmane-5b85766d88-26kr2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1b2f52e82f9 [] [] }} ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:07.761 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:07.866 [INFO][4335] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" 
HandleID="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Workload="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:07.889 [INFO][4335] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" HandleID="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Workload="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400068d230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"goldmane-5b85766d88-26kr2", "timestamp":"2026-03-04 00:52:07.866030127 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000460000)} Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:07.889 [INFO][4335] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.123 [INFO][4335] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.123 [INFO][4335] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.132 [INFO][4335] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.165 [INFO][4335] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.223 [INFO][4335] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.226 [INFO][4335] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.229 [INFO][4335] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.229 [INFO][4335] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.231 [INFO][4335] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4 Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.248 [INFO][4335] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.256 [INFO][4335] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.47.132/26] block=192.168.47.128/26 handle="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.257 [INFO][4335] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.132/26] handle="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.257 [INFO][4335] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:08.309418 containerd[1831]: 2026-03-04 00:52:08.257 [INFO][4335] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.132/26] IPv6=[] ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" HandleID="k8s-pod-network.7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Workload="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.310020 containerd[1831]: 2026-03-04 00:52:08.269 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ee329dba-1352-4c5a-bb12-ba0f81ec0c1c", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"goldmane-5b85766d88-26kr2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1b2f52e82f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.310020 containerd[1831]: 2026-03-04 00:52:08.270 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.132/32] ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.310020 containerd[1831]: 2026-03-04 00:52:08.270 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b2f52e82f9 ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.310020 containerd[1831]: 2026-03-04 00:52:08.276 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.310020 containerd[1831]: 2026-03-04 00:52:08.279 [INFO][4267] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ee329dba-1352-4c5a-bb12-ba0f81ec0c1c", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4", Pod:"goldmane-5b85766d88-26kr2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1b2f52e82f9", MAC:"ce:e2:7b:f0:03:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.310020 containerd[1831]: 2026-03-04 00:52:08.303 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4" Namespace="calico-system" Pod="goldmane-5b85766d88-26kr2" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-goldmane--5b85766d88--26kr2-eth0" Mar 4 00:52:08.311696 containerd[1831]: time="2026-03-04T00:52:08.310773061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c45f96d9c-wqfg6,Uid:5eb4339c-7657-4027-8dc5-7b1105a7a5ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a\"" Mar 4 00:52:08.315419 containerd[1831]: time="2026-03-04T00:52:08.315389457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 00:52:08.320409 containerd[1831]: time="2026-03-04T00:52:08.320214533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dbdd77d5-kb2wx,Uid:e25b2359-8549-4cfb-921c-3e500b348461,Namespace:calico-system,Attempt:0,} returns sandbox id \"1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132\"" Mar 4 00:52:08.343899 containerd[1831]: time="2026-03-04T00:52:08.343808152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.344257 containerd[1831]: time="2026-03-04T00:52:08.343869032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.344257 containerd[1831]: time="2026-03-04T00:52:08.343896272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.344257 containerd[1831]: time="2026-03-04T00:52:08.343982032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.371558 systemd-networkd[1371]: calied58cc79ba5: Link UP Mar 4 00:52:08.372619 systemd-networkd[1371]: calied58cc79ba5: Gained carrier Mar 4 00:52:08.394444 containerd[1831]: time="2026-03-04T00:52:08.393997988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vh8pq,Uid:17adcf2c-ca3d-47de-a484-e9ed33024310,Namespace:kube-system,Attempt:0,} returns sandbox id \"c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05\"" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:07.701 [ERROR][4245] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:07.745 [INFO][4245] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0 calico-apiserver-7c45f96d9c- calico-system faab35c8-45b0-4065-83ed-922bb968ad92 868 0 2026-03-04 00:51:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c45f96d9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 calico-apiserver-7c45f96d9c-8mm4l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calied58cc79ba5 [] [] }} ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:07.745 [INFO][4245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:07.874 [INFO][4332] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" HandleID="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:07.890 [INFO][4332] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" HandleID="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e5dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"calico-apiserver-7c45f96d9c-8mm4l", "timestamp":"2026-03-04 00:52:07.87452884 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400027e420)} Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:07.891 [INFO][4332] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.257 [INFO][4332] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.257 [INFO][4332] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.261 [INFO][4332] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.279 [INFO][4332] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.296 [INFO][4332] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.306 [INFO][4332] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.317 [INFO][4332] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.317 [INFO][4332] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.322 [INFO][4332] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57 Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.338 [INFO][4332] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.355 [INFO][4332] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.47.133/26] block=192.168.47.128/26 handle="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.355 [INFO][4332] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.133/26] handle="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.356 [INFO][4332] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:08.402890 containerd[1831]: 2026-03-04 00:52:08.356 [INFO][4332] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.133/26] IPv6=[] ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" HandleID="k8s-pod-network.026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Workload="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.403561 containerd[1831]: 2026-03-04 00:52:08.365 [INFO][4245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0", GenerateName:"calico-apiserver-7c45f96d9c-", Namespace:"calico-system", SelfLink:"", UID:"faab35c8-45b0-4065-83ed-922bb968ad92", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7c45f96d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"calico-apiserver-7c45f96d9c-8mm4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calied58cc79ba5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.403561 containerd[1831]: 2026-03-04 00:52:08.366 [INFO][4245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.133/32] ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.403561 containerd[1831]: 2026-03-04 00:52:08.366 [INFO][4245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied58cc79ba5 ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.403561 containerd[1831]: 2026-03-04 00:52:08.376 [INFO][4245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" 
WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.403561 containerd[1831]: 2026-03-04 00:52:08.380 [INFO][4245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0", GenerateName:"calico-apiserver-7c45f96d9c-", Namespace:"calico-system", SelfLink:"", UID:"faab35c8-45b0-4065-83ed-922bb968ad92", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c45f96d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57", Pod:"calico-apiserver-7c45f96d9c-8mm4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calied58cc79ba5", MAC:"4a:46:ef:4d:11:32", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.403561 containerd[1831]: 2026-03-04 00:52:08.397 [INFO][4245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57" Namespace="calico-system" Pod="calico-apiserver-7c45f96d9c-8mm4l" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-calico--apiserver--7c45f96d9c--8mm4l-eth0" Mar 4 00:52:08.424434 containerd[1831]: time="2026-03-04T00:52:08.424387922Z" level=info msg="CreateContainer within sandbox \"c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 00:52:08.438907 containerd[1831]: time="2026-03-04T00:52:08.438864949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-26kr2,Uid:ee329dba-1352-4c5a-bb12-ba0f81ec0c1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4\"" Mar 4 00:52:08.448340 systemd-networkd[1371]: cali20f476f75e8: Link UP Mar 4 00:52:08.449451 systemd-networkd[1371]: cali20f476f75e8: Gained carrier Mar 4 00:52:08.458581 containerd[1831]: time="2026-03-04T00:52:08.457742013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.458581 containerd[1831]: time="2026-03-04T00:52:08.457804573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.458581 containerd[1831]: time="2026-03-04T00:52:08.457815333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.458581 containerd[1831]: time="2026-03-04T00:52:08.457991453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:07.697 [ERROR][4259] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:07.735 [INFO][4259] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0 coredns-674b8bbfcf- kube-system e098973d-49bb-4990-85c1-ba2e47b94368 871 0 2026-03-04 00:51:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 coredns-674b8bbfcf-x76fc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20f476f75e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:07.735 [INFO][4259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:07.861 [INFO][4333] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" HandleID="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" 
Workload="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:07.891 [INFO][4333] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" HandleID="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Workload="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ee380), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"coredns-674b8bbfcf-x76fc", "timestamp":"2026-03-04 00:52:07.861445171 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000468000)} Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:07.891 [INFO][4333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.355 [INFO][4333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.356 [INFO][4333] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.360 [INFO][4333] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.373 [INFO][4333] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.392 [INFO][4333] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.399 [INFO][4333] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.404 [INFO][4333] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.404 [INFO][4333] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.405 [INFO][4333] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1 Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.418 [INFO][4333] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.436 [INFO][4333] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.47.134/26] block=192.168.47.128/26 handle="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.437 [INFO][4333] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.134/26] handle="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.437 [INFO][4333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:08.480555 containerd[1831]: 2026-03-04 00:52:08.437 [INFO][4333] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.134/26] IPv6=[] ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" HandleID="k8s-pod-network.a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Workload="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.481320 containerd[1831]: 2026-03-04 00:52:08.442 [INFO][4259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e098973d-49bb-4990-85c1-ba2e47b94368", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"coredns-674b8bbfcf-x76fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20f476f75e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.481320 containerd[1831]: 2026-03-04 00:52:08.443 [INFO][4259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.134/32] ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.481320 containerd[1831]: 2026-03-04 00:52:08.443 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20f476f75e8 ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.481320 containerd[1831]: 2026-03-04 00:52:08.449 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.481320 containerd[1831]: 2026-03-04 00:52:08.451 [INFO][4259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e098973d-49bb-4990-85c1-ba2e47b94368", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1", Pod:"coredns-674b8bbfcf-x76fc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20f476f75e8", MAC:"5a:21:fc:33:78:d6", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.481320 containerd[1831]: 2026-03-04 00:52:08.475 [INFO][4259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-x76fc" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-coredns--674b8bbfcf--x76fc-eth0" Mar 4 00:52:08.494345 containerd[1831]: time="2026-03-04T00:52:08.493840222Z" level=info msg="CreateContainer within sandbox \"c434223fd01a887a79f61d929e26bde1c30d140d292c52f8ef424a27c3454b05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b526756c504afb680f1436ee89d869750ccdb318233e483db13120292d3e40c\"" Mar 4 00:52:08.496076 containerd[1831]: time="2026-03-04T00:52:08.495946060Z" level=info msg="StartContainer for \"3b526756c504afb680f1436ee89d869750ccdb318233e483db13120292d3e40c\"" Mar 4 00:52:08.535819 containerd[1831]: time="2026-03-04T00:52:08.535286706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.535819 containerd[1831]: time="2026-03-04T00:52:08.535398906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.535819 containerd[1831]: time="2026-03-04T00:52:08.535415545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.535819 containerd[1831]: time="2026-03-04T00:52:08.535505985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.543139 containerd[1831]: time="2026-03-04T00:52:08.542959459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c45f96d9c-8mm4l,Uid:faab35c8-45b0-4065-83ed-922bb968ad92,Namespace:calico-system,Attempt:0,} returns sandbox id \"026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57\"" Mar 4 00:52:08.554411 systemd-networkd[1371]: calibd8567c0071: Link UP Mar 4 00:52:08.558118 systemd-networkd[1371]: calibd8567c0071: Gained carrier Mar 4 00:52:08.597658 containerd[1831]: time="2026-03-04T00:52:08.597614811Z" level=info msg="StartContainer for \"3b526756c504afb680f1436ee89d869750ccdb318233e483db13120292d3e40c\" returns successfully" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:07.705 [ERROR][4277] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:07.762 [INFO][4277] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0 whisker-5b4866d8d5- calico-system 875baab3-ef36-432f-9e30-a2535266fb28 885 0 2026-03-04 00:51:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b4866d8d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 whisker-5b4866d8d5-skhsh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibd8567c0071 [] [] }} 
ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:07.762 [INFO][4277] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:07.879 [INFO][4348] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:07.898 [INFO][4348] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429930), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"whisker-5b4866d8d5-skhsh", "timestamp":"2026-03-04 00:52:07.879082916 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001ee420)} Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:07.898 [INFO][4348] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM 
lock. Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.437 [INFO][4348] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.437 [INFO][4348] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.462 [INFO][4348] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.476 [INFO][4348] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.489 [INFO][4348] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.492 [INFO][4348] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.498 [INFO][4348] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.498 [INFO][4348] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.501 [INFO][4348] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5 Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.512 [INFO][4348] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" 
host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.529 [INFO][4348] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.135/26] block=192.168.47.128/26 handle="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.529 [INFO][4348] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.135/26] handle="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.529 [INFO][4348] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:08.600066 containerd[1831]: 2026-03-04 00:52:08.529 [INFO][4348] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.135/26] IPv6=[] ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.600619 containerd[1831]: 2026-03-04 00:52:08.538 [INFO][4277] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0", GenerateName:"whisker-5b4866d8d5-", Namespace:"calico-system", SelfLink:"", UID:"875baab3-ef36-432f-9e30-a2535266fb28", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b4866d8d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"whisker-5b4866d8d5-skhsh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibd8567c0071", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.600619 containerd[1831]: 2026-03-04 00:52:08.542 [INFO][4277] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.135/32] ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.600619 containerd[1831]: 2026-03-04 00:52:08.543 [INFO][4277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd8567c0071 ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.600619 containerd[1831]: 2026-03-04 00:52:08.557 [INFO][4277] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" 
WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.600619 containerd[1831]: 2026-03-04 00:52:08.560 [INFO][4277] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0", GenerateName:"whisker-5b4866d8d5-", Namespace:"calico-system", SelfLink:"", UID:"875baab3-ef36-432f-9e30-a2535266fb28", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b4866d8d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5", Pod:"whisker-5b4866d8d5-skhsh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibd8567c0071", MAC:"0e:25:ee:96:9b:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:08.600619 
containerd[1831]: 2026-03-04 00:52:08.593 [INFO][4277] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Namespace="calico-system" Pod="whisker-5b4866d8d5-skhsh" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:08.646339 containerd[1831]: time="2026-03-04T00:52:08.645136610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:08.646339 containerd[1831]: time="2026-03-04T00:52:08.645193530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:08.646339 containerd[1831]: time="2026-03-04T00:52:08.645209050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.646339 containerd[1831]: time="2026-03-04T00:52:08.645288570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:08.648158 containerd[1831]: time="2026-03-04T00:52:08.648127487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x76fc,Uid:e098973d-49bb-4990-85c1-ba2e47b94368,Namespace:kube-system,Attempt:0,} returns sandbox id \"a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1\"" Mar 4 00:52:08.657376 containerd[1831]: time="2026-03-04T00:52:08.657337919Z" level=info msg="CreateContainer within sandbox \"a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 00:52:08.690989 containerd[1831]: time="2026-03-04T00:52:08.690844930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b4866d8d5-skhsh,Uid:875baab3-ef36-432f-9e30-a2535266fb28,Namespace:calico-system,Attempt:0,} returns sandbox id \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\"" Mar 4 00:52:08.699133 containerd[1831]: time="2026-03-04T00:52:08.699084923Z" level=info msg="CreateContainer within sandbox \"a832cc1ada9467bc57fd39710f0be715a616f63f12ee7c064548e13cffed26b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af4c3d779f13c9f5acff75f8c084bfad0d2d3dfec9fb95a691f474e14d1cedbb\"" Mar 4 00:52:08.699681 containerd[1831]: time="2026-03-04T00:52:08.699654403Z" level=info msg="StartContainer for \"af4c3d779f13c9f5acff75f8c084bfad0d2d3dfec9fb95a691f474e14d1cedbb\"" Mar 4 00:52:08.799622 containerd[1831]: time="2026-03-04T00:52:08.799394196Z" level=info msg="StartContainer for \"af4c3d779f13c9f5acff75f8c084bfad0d2d3dfec9fb95a691f474e14d1cedbb\" returns successfully" Mar 4 00:52:08.859795 containerd[1831]: time="2026-03-04T00:52:08.859615904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b4l6w,Uid:f23c515e-4b9c-4719-aa7e-6cc7d093c864,Namespace:calico-system,Attempt:0,}" Mar 4 00:52:09.098509 systemd-networkd[1371]: cali696d454b481: Gained IPv6LL 
Mar 4 00:52:09.174593 kubelet[3342]: I0304 00:52:09.174511 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x76fc" podStartSLOduration=34.17441579 podStartE2EDuration="34.17441579s" podCreationTimestamp="2026-03-04 00:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:52:09.173475671 +0000 UTC m=+42.441759185" watchObservedRunningTime="2026-03-04 00:52:09.17441579 +0000 UTC m=+42.442699264" Mar 4 00:52:09.245814 kubelet[3342]: I0304 00:52:09.245754 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vh8pq" podStartSLOduration=34.245735568 podStartE2EDuration="34.245735568s" podCreationTimestamp="2026-03-04 00:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:52:09.203460165 +0000 UTC m=+42.471743639" watchObservedRunningTime="2026-03-04 00:52:09.245735568 +0000 UTC m=+42.514019042" Mar 4 00:52:09.299406 systemd-networkd[1371]: calic73a95d9ea8: Link UP Mar 4 00:52:09.300657 systemd-networkd[1371]: calic73a95d9ea8: Gained carrier Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.058 [ERROR][4876] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.080 [INFO][4876] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0 csi-node-driver- calico-system f23c515e-4b9c-4719-aa7e-6cc7d093c864 735 0 2026-03-04 00:51:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 csi-node-driver-b4l6w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic73a95d9ea8 [] [] }} ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.082 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.132 [INFO][4890] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" HandleID="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Workload="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.151 [INFO][4890] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" HandleID="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Workload="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"csi-node-driver-b4l6w", "timestamp":"2026-03-04 00:52:09.132068827 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001851e0)} Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.151 [INFO][4890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.151 [INFO][4890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.151 [INFO][4890] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.155 [INFO][4890] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.198 [INFO][4890] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.227 [INFO][4890] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.234 [INFO][4890] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.242 [INFO][4890] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.242 [INFO][4890] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.248 [INFO][4890] ipam/ipam.go 1806: 
Creating new handle: k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.260 [INFO][4890] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.284 [INFO][4890] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.136/26] block=192.168.47.128/26 handle="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.284 [INFO][4890] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.136/26] handle="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.284 [INFO][4890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 00:52:09.331875 containerd[1831]: 2026-03-04 00:52:09.284 [INFO][4890] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.136/26] IPv6=[] ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" HandleID="k8s-pod-network.4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Workload="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.333923 containerd[1831]: 2026-03-04 00:52:09.292 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f23c515e-4b9c-4719-aa7e-6cc7d093c864", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"csi-node-driver-b4l6w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic73a95d9ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:09.333923 containerd[1831]: 2026-03-04 00:52:09.293 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.136/32] ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.333923 containerd[1831]: 2026-03-04 00:52:09.293 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic73a95d9ea8 ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.333923 containerd[1831]: 2026-03-04 00:52:09.304 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.333923 containerd[1831]: 2026-03-04 00:52:09.309 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"f23c515e-4b9c-4719-aa7e-6cc7d093c864", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea", Pod:"csi-node-driver-b4l6w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic73a95d9ea8", MAC:"82:1a:53:da:f0:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:09.333923 containerd[1831]: 2026-03-04 00:52:09.326 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea" Namespace="calico-system" Pod="csi-node-driver-b4l6w" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-csi--node--driver--b4l6w-eth0" Mar 4 00:52:09.349356 kernel: calico-node[4779]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 00:52:09.405330 containerd[1831]: time="2026-03-04T00:52:09.403533591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:09.405330 containerd[1831]: time="2026-03-04T00:52:09.403669151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:09.405330 containerd[1831]: time="2026-03-04T00:52:09.403684391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:09.405330 containerd[1831]: time="2026-03-04T00:52:09.403788350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:09.475358 containerd[1831]: time="2026-03-04T00:52:09.475241968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b4l6w,Uid:f23c515e-4b9c-4719-aa7e-6cc7d093c864,Namespace:calico-system,Attempt:0,} returns sandbox id \"4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea\"" Mar 4 00:52:09.610380 systemd-networkd[1371]: calibd8567c0071: Gained IPv6LL Mar 4 00:52:09.674546 systemd-networkd[1371]: cali35e9d0db362: Gained IPv6LL Mar 4 00:52:09.801460 systemd-networkd[1371]: cali20f476f75e8: Gained IPv6LL Mar 4 00:52:09.929886 systemd-networkd[1371]: caliad06a13167a: Gained IPv6LL Mar 4 00:52:09.930926 systemd-networkd[1371]: cali1b2f52e82f9: Gained IPv6LL Mar 4 00:52:09.931093 systemd-networkd[1371]: calied58cc79ba5: Gained IPv6LL Mar 4 00:52:09.997191 systemd-networkd[1371]: vxlan.calico: Link UP Mar 4 00:52:09.997201 systemd-networkd[1371]: vxlan.calico: Gained carrier Mar 4 00:52:10.631144 containerd[1831]: time="2026-03-04T00:52:10.631090323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:10.636873 containerd[1831]: time="2026-03-04T00:52:10.636828838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active 
requests=0, bytes read=45552315" Mar 4 00:52:10.640275 containerd[1831]: time="2026-03-04T00:52:10.640125596Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:10.645714 containerd[1831]: time="2026-03-04T00:52:10.645589831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:10.647112 containerd[1831]: time="2026-03-04T00:52:10.647002310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.330885054s" Mar 4 00:52:10.647112 containerd[1831]: time="2026-03-04T00:52:10.647034670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 00:52:10.651741 containerd[1831]: time="2026-03-04T00:52:10.651512186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 00:52:10.668519 containerd[1831]: time="2026-03-04T00:52:10.668478251Z" level=info msg="CreateContainer within sandbox \"b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 00:52:10.702280 containerd[1831]: time="2026-03-04T00:52:10.702241702Z" level=info msg="CreateContainer within sandbox \"b03e21f796b1d95684bb2ea048dc901c8526e3c58ffa5068c314d8efc0ee2f0a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"612ced198133784e06a25807bd27669fd38421ce1c4a472d97a6fcf00626053e\"" Mar 4 00:52:10.703790 containerd[1831]: time="2026-03-04T00:52:10.703758500Z" level=info msg="StartContainer for \"612ced198133784e06a25807bd27669fd38421ce1c4a472d97a6fcf00626053e\"" Mar 4 00:52:10.764224 containerd[1831]: time="2026-03-04T00:52:10.764102288Z" level=info msg="StartContainer for \"612ced198133784e06a25807bd27669fd38421ce1c4a472d97a6fcf00626053e\" returns successfully" Mar 4 00:52:11.199896 kubelet[3342]: I0304 00:52:11.199837 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c45f96d9c-wqfg6" podStartSLOduration=22.865067499 podStartE2EDuration="25.199820349s" podCreationTimestamp="2026-03-04 00:51:46 +0000 UTC" firstStartedPulling="2026-03-04 00:52:08.315176937 +0000 UTC m=+41.583460411" lastFinishedPulling="2026-03-04 00:52:10.649929787 +0000 UTC m=+43.918213261" observedRunningTime="2026-03-04 00:52:11.199573629 +0000 UTC m=+44.467857103" watchObservedRunningTime="2026-03-04 00:52:11.199820349 +0000 UTC m=+44.468103823" Mar 4 00:52:11.337417 systemd-networkd[1371]: calic73a95d9ea8: Gained IPv6LL Mar 4 00:52:12.041641 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Mar 4 00:52:12.180052 kubelet[3342]: I0304 00:52:12.179749 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:52:13.230856 containerd[1831]: time="2026-03-04T00:52:13.230800063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:13.234288 containerd[1831]: time="2026-03-04T00:52:13.234124940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 4 00:52:13.237268 containerd[1831]: time="2026-03-04T00:52:13.237239618Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:13.241795 containerd[1831]: time="2026-03-04T00:52:13.241733414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:13.242621 containerd[1831]: time="2026-03-04T00:52:13.242452693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.590907987s" Mar 4 00:52:13.242621 containerd[1831]: time="2026-03-04T00:52:13.242486133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 4 00:52:13.246485 containerd[1831]: time="2026-03-04T00:52:13.245736610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 00:52:13.262338 containerd[1831]: time="2026-03-04T00:52:13.262286676Z" level=info msg="CreateContainer within sandbox \"1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 00:52:13.301975 containerd[1831]: time="2026-03-04T00:52:13.301822241Z" level=info msg="CreateContainer within sandbox \"1346fd1b4c19268b2c965f0e38d877aa41f939808c69d8ae0d1abd8a62319132\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"03e19c634c5258c2550a717cb4d32168f42b4d24d7dd8a1afb75458bb4739b13\"" Mar 4 00:52:13.303548 containerd[1831]: time="2026-03-04T00:52:13.303508840Z" level=info msg="StartContainer for 
\"03e19c634c5258c2550a717cb4d32168f42b4d24d7dd8a1afb75458bb4739b13\"" Mar 4 00:52:13.366575 containerd[1831]: time="2026-03-04T00:52:13.366462265Z" level=info msg="StartContainer for \"03e19c634c5258c2550a717cb4d32168f42b4d24d7dd8a1afb75458bb4739b13\" returns successfully" Mar 4 00:52:14.204568 kubelet[3342]: I0304 00:52:14.204508 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79dbdd77d5-kb2wx" podStartSLOduration=20.28598037 podStartE2EDuration="25.204489254s" podCreationTimestamp="2026-03-04 00:51:49 +0000 UTC" firstStartedPulling="2026-03-04 00:52:08.326093367 +0000 UTC m=+41.594376841" lastFinishedPulling="2026-03-04 00:52:13.244602211 +0000 UTC m=+46.512885725" observedRunningTime="2026-03-04 00:52:14.200390977 +0000 UTC m=+47.468674451" watchObservedRunningTime="2026-03-04 00:52:14.204489254 +0000 UTC m=+47.472772728" Mar 4 00:52:15.063358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873669793.mount: Deactivated successfully. 
Mar 4 00:52:15.405771 containerd[1831]: time="2026-03-04T00:52:15.405714542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:15.409217 containerd[1831]: time="2026-03-04T00:52:15.409020780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980"
Mar 4 00:52:15.412623 containerd[1831]: time="2026-03-04T00:52:15.412443297Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:15.417773 containerd[1831]: time="2026-03-04T00:52:15.417737812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:15.418931 containerd[1831]: time="2026-03-04T00:52:15.418395732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.172630242s"
Mar 4 00:52:15.418931 containerd[1831]: time="2026-03-04T00:52:15.418428532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\""
Mar 4 00:52:15.420689 containerd[1831]: time="2026-03-04T00:52:15.420651810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 4 00:52:15.426862 containerd[1831]: time="2026-03-04T00:52:15.426828285Z" level=info msg="CreateContainer within sandbox \"7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 4 00:52:15.461565 containerd[1831]: time="2026-03-04T00:52:15.459799938Z" level=info msg="CreateContainer within sandbox \"7648b6e8f20423adb0d1947a00abddf5b3f0baf2e5ead481ba2bac4849e50cf4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2a2a1f680a4a8618a7ae598b0e58fc3341e71e5d9273e291176b5765372930e3\""
Mar 4 00:52:15.461565 containerd[1831]: time="2026-03-04T00:52:15.460503777Z" level=info msg="StartContainer for \"2a2a1f680a4a8618a7ae598b0e58fc3341e71e5d9273e291176b5765372930e3\""
Mar 4 00:52:15.524555 systemd[1]: run-containerd-runc-k8s.io-2a2a1f680a4a8618a7ae598b0e58fc3341e71e5d9273e291176b5765372930e3-runc.zRZrJc.mount: Deactivated successfully.
Mar 4 00:52:15.570932 containerd[1831]: time="2026-03-04T00:52:15.570882446Z" level=info msg="StartContainer for \"2a2a1f680a4a8618a7ae598b0e58fc3341e71e5d9273e291176b5765372930e3\" returns successfully"
Mar 4 00:52:15.990114 containerd[1831]: time="2026-03-04T00:52:15.990071180Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:15.996146 containerd[1831]: time="2026-03-04T00:52:15.994926976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 4 00:52:15.997354 containerd[1831]: time="2026-03-04T00:52:15.997316174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 576.614444ms"
Mar 4 00:52:15.997419 containerd[1831]: time="2026-03-04T00:52:15.997357614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Mar 4 00:52:15.998872 containerd[1831]: time="2026-03-04T00:52:15.998804853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 4 00:52:16.004651 containerd[1831]: time="2026-03-04T00:52:16.004612248Z" level=info msg="CreateContainer within sandbox \"026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 4 00:52:16.043019 containerd[1831]: time="2026-03-04T00:52:16.042975336Z" level=info msg="CreateContainer within sandbox \"026b19d701f9e43ea03a3ae0b264eae8e21c52e8e913b6e875e3859ee619ca57\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af43a727a4af3749cc91eb1c1fc1adce1ad78902414124ab352a3addca1a95b4\""
Mar 4 00:52:16.043592 containerd[1831]: time="2026-03-04T00:52:16.043571656Z" level=info msg="StartContainer for \"af43a727a4af3749cc91eb1c1fc1adce1ad78902414124ab352a3addca1a95b4\""
Mar 4 00:52:16.102622 containerd[1831]: time="2026-03-04T00:52:16.102557847Z" level=info msg="StartContainer for \"af43a727a4af3749cc91eb1c1fc1adce1ad78902414124ab352a3addca1a95b4\" returns successfully"
Mar 4 00:52:16.230137 kubelet[3342]: I0304 00:52:16.230072 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c45f96d9c-8mm4l" podStartSLOduration=22.777485026 podStartE2EDuration="30.230054462s" podCreationTimestamp="2026-03-04 00:51:46 +0000 UTC" firstStartedPulling="2026-03-04 00:52:08.545405977 +0000 UTC m=+41.813689451" lastFinishedPulling="2026-03-04 00:52:15.997975373 +0000 UTC m=+49.266258887" observedRunningTime="2026-03-04 00:52:16.20816752 +0000 UTC m=+49.476450994" watchObservedRunningTime="2026-03-04 00:52:16.230054462 +0000 UTC m=+49.498337936"
Mar 4 00:52:16.230708 kubelet[3342]: I0304 00:52:16.230295 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-26kr2" podStartSLOduration=23.252362638 podStartE2EDuration="30.230290182s" podCreationTimestamp="2026-03-04 00:51:46 +0000 UTC" firstStartedPulling="2026-03-04 00:52:08.441627027 +0000 UTC m=+41.709910501" lastFinishedPulling="2026-03-04 00:52:15.419554571 +0000 UTC m=+48.687838045" observedRunningTime="2026-03-04 00:52:16.228933583 +0000 UTC m=+49.497217057" watchObservedRunningTime="2026-03-04 00:52:16.230290182 +0000 UTC m=+49.498573656"
Mar 4 00:52:17.196958 kubelet[3342]: I0304 00:52:17.196920 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 4 00:52:17.760035 containerd[1831]: time="2026-03-04T00:52:17.759981159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:17.764668 containerd[1831]: time="2026-03-04T00:52:17.764167436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804"
Mar 4 00:52:17.769121 containerd[1831]: time="2026-03-04T00:52:17.768918832Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:17.774819 containerd[1831]: time="2026-03-04T00:52:17.774665307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:17.776158 containerd[1831]: time="2026-03-04T00:52:17.775923506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.777081933s"
Mar 4 00:52:17.776158 containerd[1831]: time="2026-03-04T00:52:17.775960226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\""
Mar 4 00:52:17.778273 containerd[1831]: time="2026-03-04T00:52:17.777617504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 4 00:52:17.798623 containerd[1831]: time="2026-03-04T00:52:17.798448247Z" level=info msg="CreateContainer within sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 4 00:52:17.849848 containerd[1831]: time="2026-03-04T00:52:17.849476325Z" level=info msg="CreateContainer within sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\""
Mar 4 00:52:17.850709 containerd[1831]: time="2026-03-04T00:52:17.850665764Z" level=info msg="StartContainer for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\""
Mar 4 00:52:17.930400 containerd[1831]: time="2026-03-04T00:52:17.930356658Z" level=info msg="StartContainer for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" returns successfully"
Mar 4 00:52:18.201233 kubelet[3342]: I0304 00:52:18.201173 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 4 00:52:18.873473 kubelet[3342]: I0304 00:52:18.873387 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 4 00:52:19.143770 containerd[1831]: time="2026-03-04T00:52:19.142912617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:19.145982 containerd[1831]: time="2026-03-04T00:52:19.145945295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497"
Mar 4 00:52:19.149696 containerd[1831]: time="2026-03-04T00:52:19.149645932Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:19.154859 containerd[1831]: time="2026-03-04T00:52:19.154803368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:19.155980 containerd[1831]: time="2026-03-04T00:52:19.155475167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.377478983s"
Mar 4 00:52:19.155980 containerd[1831]: time="2026-03-04T00:52:19.155506487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\""
Mar 4 00:52:19.157696 containerd[1831]: time="2026-03-04T00:52:19.157482645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 4 00:52:19.165796 containerd[1831]: time="2026-03-04T00:52:19.165756239Z" level=info msg="CreateContainer within sandbox \"4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 4 00:52:19.211841 containerd[1831]: time="2026-03-04T00:52:19.211655801Z" level=info msg="CreateContainer within sandbox \"4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"058f6304e2e49533b347438c7f27f847cf340a1edfe85f83880abdf5fddee7d8\""
Mar 4 00:52:19.215207 containerd[1831]: time="2026-03-04T00:52:19.214412158Z" level=info msg="StartContainer for \"058f6304e2e49533b347438c7f27f847cf340a1edfe85f83880abdf5fddee7d8\""
Mar 4 00:52:19.295934 containerd[1831]: time="2026-03-04T00:52:19.295890891Z" level=info msg="StartContainer for \"058f6304e2e49533b347438c7f27f847cf340a1edfe85f83880abdf5fddee7d8\" returns successfully"
Mar 4 00:52:20.883843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616348739.mount: Deactivated successfully.
Mar 4 00:52:20.961283 containerd[1831]: time="2026-03-04T00:52:20.960488757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:20.963545 containerd[1831]: time="2026-03-04T00:52:20.963511915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Mar 4 00:52:20.971593 containerd[1831]: time="2026-03-04T00:52:20.969902949Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:20.982418 containerd[1831]: time="2026-03-04T00:52:20.982371459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:20.984533 containerd[1831]: time="2026-03-04T00:52:20.984494617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.826975532s"
Mar 4 00:52:20.985051 containerd[1831]: time="2026-03-04T00:52:20.984657137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Mar 4 00:52:20.988433 containerd[1831]: time="2026-03-04T00:52:20.988355694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 4 00:52:20.995523 containerd[1831]: time="2026-03-04T00:52:20.995489048Z" level=info msg="CreateContainer within sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 4 00:52:21.044663 containerd[1831]: time="2026-03-04T00:52:21.044613168Z" level=info msg="CreateContainer within sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\""
Mar 4 00:52:21.046858 containerd[1831]: time="2026-03-04T00:52:21.046556726Z" level=info msg="StartContainer for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\""
Mar 4 00:52:21.089084 systemd[1]: run-containerd-runc-k8s.io-6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5-runc.p05JkZ.mount: Deactivated successfully.
Mar 4 00:52:21.138585 containerd[1831]: time="2026-03-04T00:52:21.138464290Z" level=info msg="StartContainer for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" returns successfully"
Mar 4 00:52:21.212220 containerd[1831]: time="2026-03-04T00:52:21.212181709Z" level=info msg="StopContainer for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" with timeout 30 (s)"
Mar 4 00:52:21.212366 containerd[1831]: time="2026-03-04T00:52:21.212244309Z" level=info msg="StopContainer for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" with timeout 30 (s)"
Mar 4 00:52:21.213865 containerd[1831]: time="2026-03-04T00:52:21.213827228Z" level=info msg="Stop container \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" with signal terminated"
Mar 4 00:52:21.215554 containerd[1831]: time="2026-03-04T00:52:21.215281867Z" level=info msg="Stop container \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" with signal terminated"
Mar 4 00:52:21.234922 kubelet[3342]: I0304 00:52:21.234798 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b4866d8d5-skhsh" podStartSLOduration=16.940194044 podStartE2EDuration="29.234781491s" podCreationTimestamp="2026-03-04 00:51:52 +0000 UTC" firstStartedPulling="2026-03-04 00:52:08.691899209 +0000 UTC m=+41.960182683" lastFinishedPulling="2026-03-04 00:52:20.986486656 +0000 UTC m=+54.254770130" observedRunningTime="2026-03-04 00:52:21.233251972 +0000 UTC m=+54.501535446" watchObservedRunningTime="2026-03-04 00:52:21.234781491 +0000 UTC m=+54.503064965"
Mar 4 00:52:21.277381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5-rootfs.mount: Deactivated successfully.
Mar 4 00:52:21.294443 containerd[1831]: time="2026-03-04T00:52:21.294377242Z" level=info msg="shim disconnected" id=9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09 namespace=k8s.io
Mar 4 00:52:21.294443 containerd[1831]: time="2026-03-04T00:52:21.294430762Z" level=warning msg="cleaning up after shim disconnected" id=9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09 namespace=k8s.io
Mar 4 00:52:21.294443 containerd[1831]: time="2026-03-04T00:52:21.294439442Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 00:52:22.025043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09-rootfs.mount: Deactivated successfully.
Mar 4 00:52:22.531587 containerd[1831]: time="2026-03-04T00:52:22.531395887Z" level=info msg="shim disconnected" id=6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5 namespace=k8s.io
Mar 4 00:52:22.531587 containerd[1831]: time="2026-03-04T00:52:22.531447887Z" level=warning msg="cleaning up after shim disconnected" id=6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5 namespace=k8s.io
Mar 4 00:52:22.531587 containerd[1831]: time="2026-03-04T00:52:22.531455887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 00:52:22.542820 containerd[1831]: time="2026-03-04T00:52:22.541791238Z" level=info msg="StopContainer for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" returns successfully"
Mar 4 00:52:22.555447 containerd[1831]: time="2026-03-04T00:52:22.555257148Z" level=warning msg="cleanup warnings time=\"2026-03-04T00:52:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 4 00:52:22.565276 containerd[1831]: time="2026-03-04T00:52:22.565062100Z" level=info msg="StopContainer for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" returns successfully"
Mar 4 00:52:22.566011 containerd[1831]: time="2026-03-04T00:52:22.565982339Z" level=info msg="StopPodSandbox for \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\""
Mar 4 00:52:22.566083 containerd[1831]: time="2026-03-04T00:52:22.566033339Z" level=info msg="Container to stop \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 00:52:22.566083 containerd[1831]: time="2026-03-04T00:52:22.566045459Z" level=info msg="Container to stop \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 00:52:22.572185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5-shm.mount: Deactivated successfully.
Mar 4 00:52:22.615827 containerd[1831]: time="2026-03-04T00:52:22.615744899Z" level=info msg="shim disconnected" id=3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5 namespace=k8s.io
Mar 4 00:52:22.615827 containerd[1831]: time="2026-03-04T00:52:22.615802259Z" level=warning msg="cleaning up after shim disconnected" id=3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5 namespace=k8s.io
Mar 4 00:52:22.615827 containerd[1831]: time="2026-03-04T00:52:22.615812579Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 00:52:22.621001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5-rootfs.mount: Deactivated successfully.
Mar 4 00:52:22.758428 systemd-networkd[1371]: calibd8567c0071: Link DOWN
Mar 4 00:52:22.760948 systemd-networkd[1371]: calibd8567c0071: Lost carrier
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.754 [INFO][5632] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.755 [INFO][5632] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" iface="eth0" netns="/var/run/netns/cni-6c7caf0e-2ad5-9e7b-2be6-fe63a11ee34e"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.755 [INFO][5632] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" iface="eth0" netns="/var/run/netns/cni-6c7caf0e-2ad5-9e7b-2be6-fe63a11ee34e"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.766 [INFO][5632] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" after=11.555031ms iface="eth0" netns="/var/run/netns/cni-6c7caf0e-2ad5-9e7b-2be6-fe63a11ee34e"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.766 [INFO][5632] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.766 [INFO][5632] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.796 [INFO][5639] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.796 [INFO][5639] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.796 [INFO][5639] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.878 [INFO][5639] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.878 [INFO][5639] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0"
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.888 [INFO][5639] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 00:52:22.901424 containerd[1831]: 2026-03-04 00:52:22.894 [INFO][5632] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5"
Mar 4 00:52:22.905334 containerd[1831]: time="2026-03-04T00:52:22.904381107Z" level=info msg="TearDown network for sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" successfully"
Mar 4 00:52:22.905334 containerd[1831]: time="2026-03-04T00:52:22.904415707Z" level=info msg="StopPodSandbox for \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" returns successfully"
Mar 4 00:52:22.908366 systemd[1]: run-netns-cni\x2d6c7caf0e\x2d2ad5\x2d9e7b\x2d2be6\x2dfe63a11ee34e.mount: Deactivated successfully.
Mar 4 00:52:22.940493 kubelet[3342]: I0304 00:52:22.940449 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/875baab3-ef36-432f-9e30-a2535266fb28-whisker-backend-key-pair\") pod \"875baab3-ef36-432f-9e30-a2535266fb28\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") "
Mar 4 00:52:22.940894 kubelet[3342]: I0304 00:52:22.940511 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-686r4\" (UniqueName: \"kubernetes.io/projected/875baab3-ef36-432f-9e30-a2535266fb28-kube-api-access-686r4\") pod \"875baab3-ef36-432f-9e30-a2535266fb28\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") "
Mar 4 00:52:22.940894 kubelet[3342]: I0304 00:52:22.940539 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-whisker-ca-bundle\") pod \"875baab3-ef36-432f-9e30-a2535266fb28\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") "
Mar 4 00:52:22.940894 kubelet[3342]: I0304 00:52:22.940559 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-nginx-config\") pod \"875baab3-ef36-432f-9e30-a2535266fb28\" (UID: \"875baab3-ef36-432f-9e30-a2535266fb28\") "
Mar 4 00:52:22.954943 systemd[1]: var-lib-kubelet-pods-875baab3\x2def36\x2d432f\x2d9e30\x2da2535266fb28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d686r4.mount: Deactivated successfully.
Mar 4 00:52:22.965554 kubelet[3342]: I0304 00:52:22.965515 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875baab3-ef36-432f-9e30-a2535266fb28-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "875baab3-ef36-432f-9e30-a2535266fb28" (UID: "875baab3-ef36-432f-9e30-a2535266fb28"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 4 00:52:22.965827 kubelet[3342]: I0304 00:52:22.965777 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "875baab3-ef36-432f-9e30-a2535266fb28" (UID: "875baab3-ef36-432f-9e30-a2535266fb28"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 00:52:22.966153 kubelet[3342]: I0304 00:52:22.965770 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875baab3-ef36-432f-9e30-a2535266fb28-kube-api-access-686r4" (OuterVolumeSpecName: "kube-api-access-686r4") pod "875baab3-ef36-432f-9e30-a2535266fb28" (UID: "875baab3-ef36-432f-9e30-a2535266fb28"). InnerVolumeSpecName "kube-api-access-686r4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 00:52:22.966153 kubelet[3342]: I0304 00:52:22.965950 3342 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "875baab3-ef36-432f-9e30-a2535266fb28" (UID: "875baab3-ef36-432f-9e30-a2535266fb28"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 00:52:23.025480 systemd[1]: var-lib-kubelet-pods-875baab3\x2def36\x2d432f\x2d9e30\x2da2535266fb28-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 4 00:52:23.041752 kubelet[3342]: I0304 00:52:23.041709 3342 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-nginx-config\") on node \"ci-4081.3.6-n-4860195aa5\" DevicePath \"\""
Mar 4 00:52:23.041752 kubelet[3342]: I0304 00:52:23.041747 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/875baab3-ef36-432f-9e30-a2535266fb28-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-4860195aa5\" DevicePath \"\""
Mar 4 00:52:23.041752 kubelet[3342]: I0304 00:52:23.041759 3342 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-686r4\" (UniqueName: \"kubernetes.io/projected/875baab3-ef36-432f-9e30-a2535266fb28-kube-api-access-686r4\") on node \"ci-4081.3.6-n-4860195aa5\" DevicePath \"\""
Mar 4 00:52:23.041947 kubelet[3342]: I0304 00:52:23.041768 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/875baab3-ef36-432f-9e30-a2535266fb28-whisker-ca-bundle\") on node \"ci-4081.3.6-n-4860195aa5\" DevicePath \"\""
Mar 4 00:52:23.218799 kubelet[3342]: I0304 00:52:23.216585 3342 scope.go:117] "RemoveContainer" containerID="6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5"
Mar 4 00:52:23.230471 containerd[1831]: time="2026-03-04T00:52:23.230431246Z" level=info msg="RemoveContainer for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\""
Mar 4 00:52:23.246944 containerd[1831]: time="2026-03-04T00:52:23.246833912Z" level=info msg="RemoveContainer for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" returns successfully"
Mar 4 00:52:23.248337 kubelet[3342]: I0304 00:52:23.247902 3342 scope.go:117] "RemoveContainer" containerID="9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09"
Mar 4 00:52:23.252362 containerd[1831]: time="2026-03-04T00:52:23.252333228Z" level=info msg="RemoveContainer for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\""
Mar 4 00:52:23.261661 containerd[1831]: time="2026-03-04T00:52:23.261608421Z" level=info msg="RemoveContainer for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" returns successfully"
Mar 4 00:52:23.267352 kubelet[3342]: I0304 00:52:23.267322 3342 scope.go:117] "RemoveContainer" containerID="6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5"
Mar 4 00:52:23.267919 containerd[1831]: time="2026-03-04T00:52:23.267872095Z" level=error msg="ContainerStatus for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": not found"
Mar 4 00:52:23.270875 kubelet[3342]: E0304 00:52:23.270831 3342 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": not found" containerID="6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5"
Mar 4 00:52:23.281115 kubelet[3342]: I0304 00:52:23.280637 3342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5"} err="failed to get container status \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": not found"
Mar 4 00:52:23.282690 kubelet[3342]: I0304 00:52:23.281561 3342 scope.go:117] "RemoveContainer" containerID="9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09"
Mar 4 00:52:23.283570 containerd[1831]: time="2026-03-04T00:52:23.283037283Z" level=error msg="ContainerStatus for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": not found"
Mar 4 00:52:23.284483 kubelet[3342]: E0304 00:52:23.284232 3342 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": not found" containerID="9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09"
Mar 4 00:52:23.284483 kubelet[3342]: I0304 00:52:23.284273 3342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09"} err="failed to get container status \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": not found"
Mar 4 00:52:23.286361 kubelet[3342]: I0304 00:52:23.286327 3342 scope.go:117] "RemoveContainer" containerID="6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5"
Mar 4 00:52:23.287284 containerd[1831]: time="2026-03-04T00:52:23.287230480Z" level=error msg="ContainerStatus for \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": not found"
Mar 4 00:52:23.287723 kubelet[3342]: I0304 00:52:23.287583 3342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5"} err="failed to get container status \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6082a863eef5b55ecb835f0a2651557c8c8ce6f00ebed20bf43c818ad4fdb6a5\": not found"
Mar 4 00:52:23.287723 kubelet[3342]: I0304 00:52:23.287616 3342 scope.go:117] "RemoveContainer" containerID="9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09"
Mar 4 00:52:23.288223 containerd[1831]: time="2026-03-04T00:52:23.288169119Z" level=error msg="ContainerStatus for \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": not found"
Mar 4 00:52:23.288556 kubelet[3342]: I0304 00:52:23.288384 3342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09"} err="failed to get container status \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f2fa1b038b55e2dbbe3ad8063d46a32e248cd9f614e57ce4e29f3e7072c8c09\": not found"
Mar 4 00:52:23.345342 kubelet[3342]: I0304 00:52:23.345232 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/17b86d9b-fdec-47f6-b010-5d7b0f7487c6-nginx-config\") pod \"whisker-6dc95947c8-c5rr7\" (UID: \"17b86d9b-fdec-47f6-b010-5d7b0f7487c6\") " pod="calico-system/whisker-6dc95947c8-c5rr7"
Mar 4 00:52:23.345342 kubelet[3342]: I0304 00:52:23.345282 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17b86d9b-fdec-47f6-b010-5d7b0f7487c6-whisker-backend-key-pair\") pod \"whisker-6dc95947c8-c5rr7\" (UID: \"17b86d9b-fdec-47f6-b010-5d7b0f7487c6\") " pod="calico-system/whisker-6dc95947c8-c5rr7"
Mar 4 00:52:23.345342 kubelet[3342]: I0304 00:52:23.345301 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17b86d9b-fdec-47f6-b010-5d7b0f7487c6-whisker-ca-bundle\") pod \"whisker-6dc95947c8-c5rr7\" (UID: \"17b86d9b-fdec-47f6-b010-5d7b0f7487c6\") " pod="calico-system/whisker-6dc95947c8-c5rr7"
Mar 4 00:52:23.345342 kubelet[3342]: I0304 00:52:23.345330 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87g6c\" (UniqueName: \"kubernetes.io/projected/17b86d9b-fdec-47f6-b010-5d7b0f7487c6-kube-api-access-87g6c\") pod \"whisker-6dc95947c8-c5rr7\" (UID: \"17b86d9b-fdec-47f6-b010-5d7b0f7487c6\") " pod="calico-system/whisker-6dc95947c8-c5rr7"
Mar 4 00:52:23.621568 containerd[1831]: time="2026-03-04T00:52:23.619328093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:23.621568 containerd[1831]: time="2026-03-04T00:52:23.621480212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc95947c8-c5rr7,Uid:17b86d9b-fdec-47f6-b010-5d7b0f7487c6,Namespace:calico-system,Attempt:0,}"
Mar 4 00:52:23.627041 containerd[1831]: time="2026-03-04T00:52:23.626910527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Mar 4 00:52:23.629219 containerd[1831]: time="2026-03-04T00:52:23.629192365Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:52:23.638355 containerd[1831]: time="2026-03-04T00:52:23.638288158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:52:23.639462 containerd[1831]: time="2026-03-04T00:52:23.639094397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.650512223s" Mar 4 00:52:23.639462 containerd[1831]: time="2026-03-04T00:52:23.639133077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 4 00:52:23.652022 containerd[1831]: time="2026-03-04T00:52:23.651805667Z" level=info msg="CreateContainer within sandbox \"4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 00:52:23.717702 containerd[1831]: time="2026-03-04T00:52:23.717566334Z" level=info msg="CreateContainer within sandbox \"4437aa813bcf4e5c06f988d3589fb74f76ab9273b9fcef7ce516e9d64baf6eea\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f86f5f81e762b128f0ce2fb79ed4806875125b162b060c40b05cfb7ac6572b53\"" Mar 4 00:52:23.719025 containerd[1831]: time="2026-03-04T00:52:23.718892893Z" level=info msg="StartContainer for \"f86f5f81e762b128f0ce2fb79ed4806875125b162b060c40b05cfb7ac6572b53\"" Mar 4 00:52:23.791837 containerd[1831]: time="2026-03-04T00:52:23.791795475Z" level=info msg="StartContainer for \"f86f5f81e762b128f0ce2fb79ed4806875125b162b060c40b05cfb7ac6572b53\" returns successfully" Mar 4 00:52:23.841750 systemd-networkd[1371]: cali6719f0ad9e4: Link UP Mar 4 00:52:23.841974 systemd-networkd[1371]: cali6719f0ad9e4: Gained carrier Mar 4 00:52:23.864221 
containerd[1831]: 2026-03-04 00:52:23.741 [INFO][5676] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0 whisker-6dc95947c8- calico-system 17b86d9b-fdec-47f6-b010-5d7b0f7487c6 1070 0 2026-03-04 00:52:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dc95947c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-4860195aa5 whisker-6dc95947c8-c5rr7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6719f0ad9e4 [] [] }} ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.741 [INFO][5676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.784 [INFO][5705] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" HandleID="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.797 [INFO][5705] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" HandleID="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" 
Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002737c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4860195aa5", "pod":"whisker-6dc95947c8-c5rr7", "timestamp":"2026-03-04 00:52:23.784047081 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4860195aa5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030d8c0)} Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.797 [INFO][5705] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.797 [INFO][5705] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.797 [INFO][5705] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4860195aa5' Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.799 [INFO][5705] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.803 [INFO][5705] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.807 [INFO][5705] ipam/ipam.go 526: Trying affinity for 192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.808 [INFO][5705] ipam/ipam.go 160: Attempting to load block cidr=192.168.47.128/26 host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.810 [INFO][5705] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 
host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.810 [INFO][5705] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.811 [INFO][5705] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289 Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.820 [INFO][5705] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.833 [INFO][5705] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.47.137/26] block=192.168.47.128/26 handle="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.833 [INFO][5705] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.47.137/26] handle="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" host="ci-4081.3.6-n-4860195aa5" Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.833 [INFO][5705] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 00:52:23.864221 containerd[1831]: 2026-03-04 00:52:23.833 [INFO][5705] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.47.137/26] IPv6=[] ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" HandleID="k8s-pod-network.d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.865409 containerd[1831]: 2026-03-04 00:52:23.837 [INFO][5676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0", GenerateName:"whisker-6dc95947c8-", Namespace:"calico-system", SelfLink:"", UID:"17b86d9b-fdec-47f6-b010-5d7b0f7487c6", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 52, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dc95947c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"", Pod:"whisker-6dc95947c8-c5rr7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali6719f0ad9e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:23.865409 containerd[1831]: 2026-03-04 00:52:23.837 [INFO][5676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.137/32] ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.865409 containerd[1831]: 2026-03-04 00:52:23.837 [INFO][5676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6719f0ad9e4 ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.865409 containerd[1831]: 2026-03-04 00:52:23.844 [INFO][5676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.865409 containerd[1831]: 2026-03-04 00:52:23.845 [INFO][5676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0", GenerateName:"whisker-6dc95947c8-", Namespace:"calico-system", SelfLink:"", UID:"17b86d9b-fdec-47f6-b010-5d7b0f7487c6", 
ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 52, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dc95947c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4860195aa5", ContainerID:"d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289", Pod:"whisker-6dc95947c8-c5rr7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6719f0ad9e4", MAC:"ea:1f:91:6c:41:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:52:23.865409 containerd[1831]: 2026-03-04 00:52:23.861 [INFO][5676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289" Namespace="calico-system" Pod="whisker-6dc95947c8-c5rr7" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--6dc95947c8--c5rr7-eth0" Mar 4 00:52:23.889972 containerd[1831]: time="2026-03-04T00:52:23.889325437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:52:23.889972 containerd[1831]: time="2026-03-04T00:52:23.889382277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:52:23.889972 containerd[1831]: time="2026-03-04T00:52:23.889399077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:23.889972 containerd[1831]: time="2026-03-04T00:52:23.889485236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:52:23.937765 containerd[1831]: time="2026-03-04T00:52:23.937732398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc95947c8-c5rr7,Uid:17b86d9b-fdec-47f6-b010-5d7b0f7487c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289\"" Mar 4 00:52:23.949699 containerd[1831]: time="2026-03-04T00:52:23.949551868Z" level=info msg="CreateContainer within sandbox \"d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 00:52:23.963034 kubelet[3342]: I0304 00:52:23.962987 3342 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 00:52:23.963034 kubelet[3342]: I0304 00:52:23.963034 3342 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 00:52:23.989419 containerd[1831]: time="2026-03-04T00:52:23.989268836Z" level=info msg="CreateContainer within sandbox \"d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"981b85a0cd9df72bdba8575693d841e9abad8f88a298f4a2237c066f449c35f4\"" Mar 4 00:52:23.989776 containerd[1831]: time="2026-03-04T00:52:23.989756796Z" level=info msg="StartContainer for 
\"981b85a0cd9df72bdba8575693d841e9abad8f88a298f4a2237c066f449c35f4\"" Mar 4 00:52:24.055515 containerd[1831]: time="2026-03-04T00:52:24.055479063Z" level=info msg="StartContainer for \"981b85a0cd9df72bdba8575693d841e9abad8f88a298f4a2237c066f449c35f4\" returns successfully" Mar 4 00:52:24.067633 containerd[1831]: time="2026-03-04T00:52:24.067585893Z" level=info msg="CreateContainer within sandbox \"d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 00:52:24.106164 containerd[1831]: time="2026-03-04T00:52:24.106084103Z" level=info msg="CreateContainer within sandbox \"d861feddd861c49309d86ee2b021e6049dcfcaa526bf48912ce16afa34f8d289\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"92dc93b65fad997ab9caaf6f264f9d6718d6179599fc410392d9d3980790bfdf\"" Mar 4 00:52:24.107522 containerd[1831]: time="2026-03-04T00:52:24.106645782Z" level=info msg="StartContainer for \"92dc93b65fad997ab9caaf6f264f9d6718d6179599fc410392d9d3980790bfdf\"" Mar 4 00:52:24.163946 containerd[1831]: time="2026-03-04T00:52:24.163847936Z" level=info msg="StartContainer for \"92dc93b65fad997ab9caaf6f264f9d6718d6179599fc410392d9d3980790bfdf\" returns successfully" Mar 4 00:52:24.245846 kubelet[3342]: I0304 00:52:24.245624 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dc95947c8-c5rr7" podStartSLOduration=1.245599351 podStartE2EDuration="1.245599351s" podCreationTimestamp="2026-03-04 00:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:52:24.243163312 +0000 UTC m=+57.511446786" watchObservedRunningTime="2026-03-04 00:52:24.245599351 +0000 UTC m=+57.513882825" Mar 4 00:52:24.267857 kubelet[3342]: I0304 00:52:24.267564 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b4l6w" 
podStartSLOduration=21.10786662 podStartE2EDuration="35.267550653s" podCreationTimestamp="2026-03-04 00:51:49 +0000 UTC" firstStartedPulling="2026-03-04 00:52:09.481167523 +0000 UTC m=+42.749450997" lastFinishedPulling="2026-03-04 00:52:23.640851556 +0000 UTC m=+56.909135030" observedRunningTime="2026-03-04 00:52:24.267547733 +0000 UTC m=+57.535831207" watchObservedRunningTime="2026-03-04 00:52:24.267550653 +0000 UTC m=+57.535834127" Mar 4 00:52:24.850481 kubelet[3342]: I0304 00:52:24.850444 3342 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875baab3-ef36-432f-9e30-a2535266fb28" path="/var/lib/kubelet/pods/875baab3-ef36-432f-9e30-a2535266fb28/volumes" Mar 4 00:52:25.545540 systemd-networkd[1371]: cali6719f0ad9e4: Gained IPv6LL Mar 4 00:52:26.859279 containerd[1831]: time="2026-03-04T00:52:26.859238012Z" level=info msg="StopPodSandbox for \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\"" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.895 [WARNING][5881] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.896 [INFO][5881] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.896 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" iface="eth0" netns="" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.896 [INFO][5881] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.896 [INFO][5881] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.919 [INFO][5888] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.919 [INFO][5888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.919 [INFO][5888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.929 [WARNING][5888] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.929 [INFO][5888] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.930 [INFO][5888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:26.935850 containerd[1831]: 2026-03-04 00:52:26.932 [INFO][5881] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:26.935850 containerd[1831]: time="2026-03-04T00:52:26.935631871Z" level=info msg="TearDown network for sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" successfully" Mar 4 00:52:26.935850 containerd[1831]: time="2026-03-04T00:52:26.935653471Z" level=info msg="StopPodSandbox for \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" returns successfully" Mar 4 00:52:26.941634 containerd[1831]: time="2026-03-04T00:52:26.941360186Z" level=info msg="RemovePodSandbox for \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\"" Mar 4 00:52:26.941634 containerd[1831]: time="2026-03-04T00:52:26.941391546Z" level=info msg="Forcibly stopping sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\"" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.974 [WARNING][5903] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" WorkloadEndpoint="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.974 [INFO][5903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.974 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" iface="eth0" netns="" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.974 [INFO][5903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.974 [INFO][5903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.991 [INFO][5910] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.991 [INFO][5910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:26.991 [INFO][5910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:27.000 [WARNING][5910] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:27.000 [INFO][5910] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" HandleID="k8s-pod-network.3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Workload="ci--4081.3.6--n--4860195aa5-k8s-whisker--5b4866d8d5--skhsh-eth0" Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:27.001 [INFO][5910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:52:27.004935 containerd[1831]: 2026-03-04 00:52:27.003 [INFO][5903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5" Mar 4 00:52:27.005341 containerd[1831]: time="2026-03-04T00:52:27.004983415Z" level=info msg="TearDown network for sandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" successfully" Mar 4 00:52:27.020673 containerd[1831]: time="2026-03-04T00:52:27.020626123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:52:27.020833 containerd[1831]: time="2026-03-04T00:52:27.020708323Z" level=info msg="RemovePodSandbox \"3762def850c92f23c8107245dfc9b19fde6e836a7772cfc5f4f341749dcf67f5\" returns successfully"
Mar 4 00:52:49.526810 kubelet[3342]: I0304 00:52:49.526773 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 4 00:52:51.340631 systemd[1]: run-containerd-runc-k8s.io-2a2a1f680a4a8618a7ae598b0e58fc3341e71e5d9273e291176b5765372930e3-runc.xPmRvp.mount: Deactivated successfully.
Mar 4 00:53:12.946666 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:53038.service - OpenSSH per-connection server daemon (10.200.16.10:53038).
Mar 4 00:53:13.435529 sshd[6082]: Accepted publickey for core from 10.200.16.10 port 53038 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:13.437623 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:13.442107 systemd-logind[1769]: New session 10 of user core.
Mar 4 00:53:13.448659 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 4 00:53:13.919535 sshd[6082]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:13.923301 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:53038.service: Deactivated successfully.
Mar 4 00:53:13.927037 systemd[1]: session-10.scope: Deactivated successfully.
Mar 4 00:53:13.928522 systemd-logind[1769]: Session 10 logged out. Waiting for processes to exit.
Mar 4 00:53:13.929317 systemd-logind[1769]: Removed session 10.
Mar 4 00:53:19.015585 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:53046.service - OpenSSH per-connection server daemon (10.200.16.10:53046).
Mar 4 00:53:19.506803 sshd[6160]: Accepted publickey for core from 10.200.16.10 port 53046 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:19.507921 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:19.513590 systemd-logind[1769]: New session 11 of user core.
Mar 4 00:53:19.518984 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 4 00:53:19.921586 sshd[6160]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:19.925342 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:53046.service: Deactivated successfully.
Mar 4 00:53:19.928771 systemd[1]: session-11.scope: Deactivated successfully.
Mar 4 00:53:19.929841 systemd-logind[1769]: Session 11 logged out. Waiting for processes to exit.
Mar 4 00:53:19.930750 systemd-logind[1769]: Removed session 11.
Mar 4 00:53:25.005562 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:46184.service - OpenSSH per-connection server daemon (10.200.16.10:46184).
Mar 4 00:53:25.491850 sshd[6174]: Accepted publickey for core from 10.200.16.10 port 46184 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:25.492742 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:25.496762 systemd-logind[1769]: New session 12 of user core.
Mar 4 00:53:25.503570 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 4 00:53:25.906534 sshd[6174]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:25.910239 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:46184.service: Deactivated successfully.
Mar 4 00:53:25.913925 systemd[1]: session-12.scope: Deactivated successfully.
Mar 4 00:53:25.916068 systemd-logind[1769]: Session 12 logged out. Waiting for processes to exit.
Mar 4 00:53:25.916992 systemd-logind[1769]: Removed session 12.
Mar 4 00:53:30.990545 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:56402.service - OpenSSH per-connection server daemon (10.200.16.10:56402).
Mar 4 00:53:31.477859 sshd[6205]: Accepted publickey for core from 10.200.16.10 port 56402 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:31.479161 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:31.483607 systemd-logind[1769]: New session 13 of user core.
Mar 4 00:53:31.488587 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 00:53:31.915740 sshd[6205]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:31.920711 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:56402.service: Deactivated successfully.
Mar 4 00:53:31.923707 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 00:53:31.924596 systemd-logind[1769]: Session 13 logged out. Waiting for processes to exit.
Mar 4 00:53:31.925542 systemd-logind[1769]: Removed session 13.
Mar 4 00:53:32.000544 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:56404.service - OpenSSH per-connection server daemon (10.200.16.10:56404).
Mar 4 00:53:32.486959 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 56404 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:32.488328 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:32.491916 systemd-logind[1769]: New session 14 of user core.
Mar 4 00:53:32.496559 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 00:53:32.941059 sshd[6240]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:32.945558 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:56404.service: Deactivated successfully.
Mar 4 00:53:32.948176 systemd-logind[1769]: Session 14 logged out. Waiting for processes to exit.
Mar 4 00:53:32.949843 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 00:53:32.952251 systemd-logind[1769]: Removed session 14.
Mar 4 00:53:33.027566 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:56412.service - OpenSSH per-connection server daemon (10.200.16.10:56412).
Mar 4 00:53:33.519010 sshd[6252]: Accepted publickey for core from 10.200.16.10 port 56412 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:33.520843 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:33.525388 systemd-logind[1769]: New session 15 of user core.
Mar 4 00:53:33.528715 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 00:53:33.924844 sshd[6252]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:33.928426 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:56412.service: Deactivated successfully.
Mar 4 00:53:33.933345 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 00:53:33.934636 systemd-logind[1769]: Session 15 logged out. Waiting for processes to exit.
Mar 4 00:53:33.935612 systemd-logind[1769]: Removed session 15.
Mar 4 00:53:39.010567 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:56426.service - OpenSSH per-connection server daemon (10.200.16.10:56426).
Mar 4 00:53:39.492863 sshd[6287]: Accepted publickey for core from 10.200.16.10 port 56426 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:39.494208 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:39.498361 systemd-logind[1769]: New session 16 of user core.
Mar 4 00:53:39.504611 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 00:53:39.909676 sshd[6287]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:39.912659 systemd-logind[1769]: Session 16 logged out. Waiting for processes to exit.
Mar 4 00:53:39.912704 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:56426.service: Deactivated successfully.
Mar 4 00:53:39.917515 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 00:53:39.918708 systemd-logind[1769]: Removed session 16.
Mar 4 00:53:39.987560 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:56182.service - OpenSSH per-connection server daemon (10.200.16.10:56182).
Mar 4 00:53:40.437773 sshd[6300]: Accepted publickey for core from 10.200.16.10 port 56182 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:40.439175 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:40.443366 systemd-logind[1769]: New session 17 of user core.
Mar 4 00:53:40.449665 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 00:53:40.976842 sshd[6300]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:40.980378 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:56182.service: Deactivated successfully.
Mar 4 00:53:40.983577 systemd-logind[1769]: Session 17 logged out. Waiting for processes to exit.
Mar 4 00:53:40.983755 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 00:53:40.987039 systemd-logind[1769]: Removed session 17.
Mar 4 00:53:41.042551 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:56188.service - OpenSSH per-connection server daemon (10.200.16.10:56188).
Mar 4 00:53:41.526238 sshd[6312]: Accepted publickey for core from 10.200.16.10 port 56188 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:41.529492 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:41.533616 systemd-logind[1769]: New session 18 of user core.
Mar 4 00:53:41.541603 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 00:53:42.543907 sshd[6312]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:42.549621 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:56188.service: Deactivated successfully.
Mar 4 00:53:42.555140 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 00:53:42.556179 systemd-logind[1769]: Session 18 logged out. Waiting for processes to exit.
Mar 4 00:53:42.557188 systemd-logind[1769]: Removed session 18.
Mar 4 00:53:42.629591 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:56196.service - OpenSSH per-connection server daemon (10.200.16.10:56196).
Mar 4 00:53:43.116032 sshd[6365]: Accepted publickey for core from 10.200.16.10 port 56196 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:43.117749 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:43.121921 systemd-logind[1769]: New session 19 of user core.
Mar 4 00:53:43.124686 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 00:53:43.631576 sshd[6365]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:43.637207 systemd-logind[1769]: Session 19 logged out. Waiting for processes to exit.
Mar 4 00:53:43.637861 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:56196.service: Deactivated successfully.
Mar 4 00:53:43.640649 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 00:53:43.642717 systemd-logind[1769]: Removed session 19.
Mar 4 00:53:43.715768 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:56208.service - OpenSSH per-connection server daemon (10.200.16.10:56208).
Mar 4 00:53:44.200714 sshd[6377]: Accepted publickey for core from 10.200.16.10 port 56208 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:44.204546 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:44.215398 systemd-logind[1769]: New session 20 of user core.
Mar 4 00:53:44.219014 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 00:53:44.608065 sshd[6377]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:44.613087 systemd-logind[1769]: Session 20 logged out. Waiting for processes to exit.
Mar 4 00:53:44.614277 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:56208.service: Deactivated successfully.
Mar 4 00:53:44.618053 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 00:53:44.622386 systemd-logind[1769]: Removed session 20.
Mar 4 00:53:49.693555 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:56218.service - OpenSSH per-connection server daemon (10.200.16.10:56218).
Mar 4 00:53:50.175760 sshd[6467]: Accepted publickey for core from 10.200.16.10 port 56218 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:50.177421 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:50.181418 systemd-logind[1769]: New session 21 of user core.
Mar 4 00:53:50.190630 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 00:53:50.579131 sshd[6467]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:50.581996 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:56218.service: Deactivated successfully.
Mar 4 00:53:50.586680 systemd-logind[1769]: Session 21 logged out. Waiting for processes to exit.
Mar 4 00:53:50.587453 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 00:53:50.589024 systemd-logind[1769]: Removed session 21.
Mar 4 00:53:55.665679 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:53152.service - OpenSSH per-connection server daemon (10.200.16.10:53152).
Mar 4 00:53:56.151759 sshd[6503]: Accepted publickey for core from 10.200.16.10 port 53152 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:53:56.152939 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:53:56.157440 systemd-logind[1769]: New session 22 of user core.
Mar 4 00:53:56.161413 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 00:53:56.563300 sshd[6503]: pam_unix(sshd:session): session closed for user core
Mar 4 00:53:56.567453 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:53152.service: Deactivated successfully.
Mar 4 00:53:56.570718 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 00:53:56.572458 systemd-logind[1769]: Session 22 logged out. Waiting for processes to exit.
Mar 4 00:53:56.573253 systemd-logind[1769]: Removed session 22.
Mar 4 00:54:01.648761 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:52656.service - OpenSSH per-connection server daemon (10.200.16.10:52656).
Mar 4 00:54:02.137203 sshd[6517]: Accepted publickey for core from 10.200.16.10 port 52656 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:54:02.138133 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:54:02.142612 systemd-logind[1769]: New session 23 of user core.
Mar 4 00:54:02.146564 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 00:54:02.548406 sshd[6517]: pam_unix(sshd:session): session closed for user core
Mar 4 00:54:02.552117 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:52656.service: Deactivated successfully.
Mar 4 00:54:02.555768 systemd-logind[1769]: Session 23 logged out. Waiting for processes to exit.
Mar 4 00:54:02.556170 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 00:54:02.557611 systemd-logind[1769]: Removed session 23.
Mar 4 00:54:07.636478 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:52660.service - OpenSSH per-connection server daemon (10.200.16.10:52660).
Mar 4 00:54:08.126150 sshd[6533]: Accepted publickey for core from 10.200.16.10 port 52660 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:54:08.127647 sshd[6533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:54:08.131882 systemd-logind[1769]: New session 24 of user core.
Mar 4 00:54:08.140601 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 00:54:08.538248 sshd[6533]: pam_unix(sshd:session): session closed for user core
Mar 4 00:54:08.544379 systemd-logind[1769]: Session 24 logged out. Waiting for processes to exit.
Mar 4 00:54:08.545039 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:52660.service: Deactivated successfully.
Mar 4 00:54:08.547814 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 00:54:08.548700 systemd-logind[1769]: Removed session 24.
Mar 4 00:54:13.623708 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:56530.service - OpenSSH per-connection server daemon (10.200.16.10:56530).
Mar 4 00:54:14.109647 sshd[6547]: Accepted publickey for core from 10.200.16.10 port 56530 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:54:14.111031 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:54:14.115795 systemd-logind[1769]: New session 25 of user core.
Mar 4 00:54:14.121741 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 00:54:14.550940 sshd[6547]: pam_unix(sshd:session): session closed for user core
Mar 4 00:54:14.554467 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:56530.service: Deactivated successfully.
Mar 4 00:54:14.557699 systemd[1]: session-25.scope: Deactivated successfully.
Mar 4 00:54:14.558660 systemd-logind[1769]: Session 25 logged out. Waiting for processes to exit.
Mar 4 00:54:14.559506 systemd-logind[1769]: Removed session 25.
Mar 4 00:54:16.209631 systemd[1]: run-containerd-runc-k8s.io-2a2a1f680a4a8618a7ae598b0e58fc3341e71e5d9273e291176b5765372930e3-runc.eb5u7V.mount: Deactivated successfully.