Jan 23 23:52:56.197721 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 23 23:52:56.197745 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:52:56.197753 kernel: KASLR enabled Jan 23 23:52:56.197759 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 23 23:52:56.197767 kernel: printk: bootconsole [pl11] enabled Jan 23 23:52:56.197772 kernel: efi: EFI v2.7 by EDK II Jan 23 23:52:56.197780 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 23 23:52:56.197786 kernel: random: crng init done Jan 23 23:52:56.197792 kernel: ACPI: Early table checksum verification disabled Jan 23 23:52:56.197798 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 23 23:52:56.197804 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.197811 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.197818 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 23:52:56.197825 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.197832 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.197838 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.197845 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.197853 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.198889 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.198902 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 23 23:52:56.198909 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:56.198916 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 23 23:52:56.198923 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 23 23:52:56.198929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 23 23:52:56.198936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 23 23:52:56.198942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 23 23:52:56.198949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 23 23:52:56.198955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 23 23:52:56.198967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 23 23:52:56.198974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 23 23:52:56.198980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 23 23:52:56.198986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 23 23:52:56.198993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 23 23:52:56.198999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 23 23:52:56.199005 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 23 23:52:56.199011 kernel: Zone ranges: Jan 23 23:52:56.199018 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jan 23 23:52:56.199024 kernel: DMA32 empty Jan 23 23:52:56.199031 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:52:56.199037 kernel: Movable zone start for each node Jan 23 23:52:56.199048 kernel: Early memory node ranges Jan 23 23:52:56.199055 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 23 23:52:56.199062 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 23 23:52:56.199069 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 23 23:52:56.199075 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 23 23:52:56.199083 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 23 23:52:56.199090 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 23 23:52:56.199097 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:52:56.199104 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 23 23:52:56.199111 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 23 23:52:56.199118 kernel: psci: probing for conduit method from ACPI. Jan 23 23:52:56.199125 kernel: psci: PSCIv1.1 detected in firmware. Jan 23 23:52:56.199132 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:52:56.199138 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 23 23:52:56.199145 kernel: psci: SMC Calling Convention v1.4 Jan 23 23:52:56.199152 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 23:52:56.199159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 23:52:56.199167 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:52:56.199174 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:52:56.199181 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:52:56.199187 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:52:56.199194 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:52:56.199201 kernel: CPU features: detected: Hardware dirty bit management Jan 23 23:52:56.199208 kernel: CPU features: detected: Spectre-BHB Jan 23 23:52:56.199214 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 23:52:56.199221 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 23:52:56.199228 kernel: CPU features: detected: ARM erratum 1418040 Jan 23 23:52:56.199235 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 23 23:52:56.199243 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 23:52:56.199250 kernel: alternatives: applying boot alternatives Jan 23 23:52:56.199259 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:52:56.199266 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:52:56.199273 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:52:56.199280 kernel: Fallback order for Node 0: 0 Jan 23 23:52:56.199286 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 23 23:52:56.199293 kernel: Policy zone: Normal Jan 23 23:52:56.199300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:52:56.199307 kernel: software IO TLB: area num 2. Jan 23 23:52:56.199313 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 23 23:52:56.199322 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 23 23:52:56.199329 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:52:56.199336 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:52:56.199344 kernel: rcu: RCU event tracing is enabled. Jan 23 23:52:56.199351 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:52:56.199358 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:52:56.199365 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:52:56.199373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 23:52:56.199379 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:52:56.199386 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:52:56.199393 kernel: GICv3: 960 SPIs implemented Jan 23 23:52:56.199401 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:52:56.199408 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:52:56.199414 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 23:52:56.199421 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 23:52:56.199428 kernel: ITS: No ITS available, not enabling LPIs Jan 23 23:52:56.199435 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:52:56.199442 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:52:56.199449 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 23:52:56.199456 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 23:52:56.199462 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 23:52:56.199470 kernel: Console: colour dummy device 80x25 Jan 23 23:52:56.199478 kernel: printk: console [tty1] enabled Jan 23 23:52:56.199486 kernel: ACPI: Core revision 20230628 Jan 23 23:52:56.199493 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 23:52:56.199500 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:52:56.199507 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:52:56.199514 kernel: landlock: Up and running. Jan 23 23:52:56.199521 kernel: SELinux: Initializing. Jan 23 23:52:56.199528 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:52:56.199536 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:52:56.199544 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:52:56.199552 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:52:56.199559 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 23 23:52:56.199566 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 23:52:56.199573 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 23:52:56.199580 kernel: rcu: Hierarchical SRCU implementation. 
Jan 23 23:52:56.199587 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:52:56.199595 kernel: Remapping and enabling EFI services. Jan 23 23:52:56.199609 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:52:56.199617 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:52:56.199625 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 23:52:56.199632 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:52:56.199641 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 23:52:56.199648 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:52:56.199656 kernel: SMP: Total of 2 processors activated. Jan 23 23:52:56.199664 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:52:56.199672 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 23:52:56.199681 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 23:52:56.199689 kernel: CPU features: detected: CRC32 instructions Jan 23 23:52:56.199696 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 23:52:56.199704 kernel: CPU features: detected: LSE atomic instructions Jan 23 23:52:56.199712 kernel: CPU features: detected: Privileged Access Never Jan 23 23:52:56.199719 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:52:56.199727 kernel: alternatives: applying system-wide alternatives Jan 23 23:52:56.199734 kernel: devtmpfs: initialized Jan 23 23:52:56.199742 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:52:56.199751 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:52:56.199759 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:52:56.199766 kernel: SMBIOS 3.1.0 present. Jan 23 23:52:56.199774 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:52:56.199781 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:52:56.199789 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:52:56.199797 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:52:56.199805 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:52:56.199812 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:52:56.199821 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:52:56.199829 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:52:56.199836 kernel: cpuidle: using governor menu Jan 23 23:52:56.199844 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:52:56.199852 kernel: ASID allocator initialised with 32768 entries Jan 23 23:52:56.202152 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:52:56.202165 kernel: Serial: AMBA PL011 UART driver Jan 23 23:52:56.202173 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:52:56.202180 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:52:56.202193 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:52:56.202201 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:52:56.202208 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:52:56.202216 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:52:56.202248 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:52:56.202275 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:52:56.202283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:52:56.202291 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:52:56.202298 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:52:56.202316 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:52:56.202323 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:52:56.202331 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:52:56.202338 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:52:56.202346 kernel: ACPI: Interpreter enabled Jan 23 23:52:56.202354 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:52:56.202362 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:52:56.202369 kernel: printk: console [ttyAMA0] enabled Jan 23 23:52:56.202377 kernel: printk: bootconsole [pl11] disabled Jan 23 23:52:56.202387 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:52:56.202394 kernel: iommu: Default domain type: Translated Jan 23 23:52:56.202402 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:52:56.202410 kernel: efivars: Registered efivars operations Jan 23 23:52:56.202422 kernel: vgaarb: loaded Jan 23 23:52:56.202430 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:52:56.202437 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:52:56.202444 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:52:56.202452 kernel: pnp: PnP ACPI init Jan 23 23:52:56.202460 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:52:56.202468 kernel: NET: Registered PF_INET protocol family Jan 23 23:52:56.202475 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:52:56.202483 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:52:56.202490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:52:56.202498 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:52:56.202505 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:52:56.202512 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:52:56.202520 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:52:56.202529 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:52:56.202536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 
23:52:56.202544 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:52:56.202551 kernel: kvm [1]: HYP mode not available Jan 23 23:52:56.202558 kernel: Initialise system trusted keyrings Jan 23 23:52:56.202566 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:52:56.202573 kernel: Key type asymmetric registered Jan 23 23:52:56.202580 kernel: Asymmetric key parser 'x509' registered Jan 23 23:52:56.202588 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:52:56.202597 kernel: io scheduler mq-deadline registered Jan 23 23:52:56.202604 kernel: io scheduler kyber registered Jan 23 23:52:56.202612 kernel: io scheduler bfq registered Jan 23 23:52:56.202619 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:52:56.202627 kernel: thunder_xcv, ver 1.0 Jan 23 23:52:56.202634 kernel: thunder_bgx, ver 1.0 Jan 23 23:52:56.202641 kernel: nicpf, ver 1.0 Jan 23 23:52:56.202649 kernel: nicvf, ver 1.0 Jan 23 23:52:56.202825 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:52:56.202934 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:52:55 UTC (1769212375) Jan 23 23:52:56.202946 kernel: efifb: probing for efifb Jan 23 23:52:56.202955 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:52:56.202962 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:52:56.202970 kernel: efifb: scrolling: redraw Jan 23 23:52:56.202977 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 23:52:56.202985 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:52:56.202993 kernel: fb0: EFI VGA frame buffer device Jan 23 23:52:56.203002 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:52:56.203010 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:52:56.203018 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:52:56.203026 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:52:56.203034 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:52:56.203042 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:52:56.203050 kernel: Segment Routing with IPv6 Jan 23 23:52:56.203057 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:52:56.203064 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:52:56.203074 kernel: Key type dns_resolver registered Jan 23 23:52:56.203081 kernel: registered taskstats version 1 Jan 23 23:52:56.203089 kernel: Loading compiled-in X.509 certificates Jan 23 23:52:56.203097 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:52:56.203105 kernel: Key type .fscrypt registered Jan 23 23:52:56.203112 kernel: Key type fscrypt-provisioning registered Jan 23 23:52:56.203119 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:52:56.203127 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:52:56.203134 kernel: ima: No architecture policies found Jan 23 23:52:56.203143 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:52:56.203150 kernel: clk: Disabling unused clocks Jan 23 23:52:56.203158 kernel: Freeing unused kernel memory: 39424K Jan 23 23:52:56.203166 kernel: Run /init as init process Jan 23 23:52:56.203173 kernel: with arguments: Jan 23 23:52:56.203181 kernel: /init Jan 23 23:52:56.203188 kernel: with environment: Jan 23 23:52:56.203195 kernel: HOME=/ Jan 23 23:52:56.203208 kernel: TERM=linux Jan 23 23:52:56.203218 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:52:56.203229 systemd[1]: Detected virtualization microsoft. Jan 23 23:52:56.203237 systemd[1]: Detected architecture arm64. Jan 23 23:52:56.203245 systemd[1]: Running in initrd. Jan 23 23:52:56.203253 systemd[1]: No hostname configured, using default hostname. Jan 23 23:52:56.203261 systemd[1]: Hostname set to . Jan 23 23:52:56.203269 systemd[1]: Initializing machine ID from random generator. Jan 23 23:52:56.203278 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:52:56.203287 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:56.203295 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:56.203304 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:52:56.203312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:52:56.203320 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:52:56.203328 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:52:56.203338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:52:56.203348 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:52:56.203356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:56.203364 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:56.203372 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:52:56.203380 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:52:56.203388 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:52:56.203396 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:52:56.203404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:52:56.203413 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:52:56.203421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:52:56.203430 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:52:56.203437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 23:52:56.203446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:56.203453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:56.203462 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:52:56.203470 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:52:56.203480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:52:56.203488 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:52:56.203496 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:52:56.203504 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:52:56.203511 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:52:56.203540 systemd-journald[218]: Collecting audit messages is disabled. Jan 23 23:52:56.203562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:56.203570 systemd-journald[218]: Journal started Jan 23 23:52:56.203589 systemd-journald[218]: Runtime Journal (/run/log/journal/ada94f8b5842402cae042c6f829cfa76) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:52:56.204259 systemd-modules-load[219]: Inserted module 'overlay' Jan 23 23:52:56.220796 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:52:56.223887 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:52:56.244121 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:52:56.244145 kernel: Bridge firewalling registered Jan 23 23:52:56.241044 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 23 23:52:56.242229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:56.249883 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:52:56.257887 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:56.268295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:56.290240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:52:56.302222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:52:56.314219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:52:56.335059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:52:56.350907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:52:56.367024 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:52:56.373776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:52:56.395120 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:52:56.409036 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:52:56.416949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:52:56.436664 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 23 23:52:56.448215 dracut-cmdline[249]: dracut-dracut-053 Jan 23 23:52:56.457043 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:52:56.454916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:52:56.498723 systemd-resolved[257]: Positive Trust Anchors: Jan 23 23:52:56.498741 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:52:56.498773 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:52:56.501027 systemd-resolved[257]: Defaulting to hostname 'linux'. Jan 23 23:52:56.502080 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:52:56.507351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:52:56.592879 kernel: SCSI subsystem initialized Jan 23 23:52:56.599904 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:52:56.610882 kernel: iscsi: registered transport (tcp) Jan 23 23:52:56.626930 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:52:56.626952 kernel: QLogic iSCSI HBA Driver Jan 23 23:52:56.660417 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:52:56.676003 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:52:56.703866 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:52:56.703927 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:52:56.708591 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:52:56.759888 kernel: raid6: neonx8 gen() 15815 MB/s Jan 23 23:52:56.775868 kernel: raid6: neonx4 gen() 15682 MB/s Jan 23 23:52:56.794875 kernel: raid6: neonx2 gen() 13299 MB/s Jan 23 23:52:56.814905 kernel: raid6: neonx1 gen() 10492 MB/s Jan 23 23:52:56.833889 kernel: raid6: int64x8 gen() 6974 MB/s Jan 23 23:52:56.852895 kernel: raid6: int64x4 gen() 7372 MB/s Jan 23 23:52:56.872879 kernel: raid6: int64x2 gen() 6145 MB/s Jan 23 23:52:56.894558 kernel: raid6: int64x1 gen() 5072 MB/s Jan 23 23:52:56.894572 kernel: raid6: using algorithm neonx8 gen() 15815 MB/s Jan 23 23:52:56.916914 kernel: raid6: .... 
xor() 12043 MB/s, rmw enabled Jan 23 23:52:56.916926 kernel: raid6: using neon recovery algorithm Jan 23 23:52:56.927896 kernel: xor: measuring software checksum speed Jan 23 23:52:56.927920 kernel: 8regs : 19807 MB/sec Jan 23 23:52:56.930802 kernel: 32regs : 19660 MB/sec Jan 23 23:52:56.933596 kernel: arm64_neon : 26998 MB/sec Jan 23 23:52:56.937390 kernel: xor: using function: arm64_neon (26998 MB/sec) Jan 23 23:52:56.986867 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:52:56.997136 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:52:57.010986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:52:57.031097 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 23 23:52:57.035286 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:52:57.051143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:52:57.070701 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Jan 23 23:52:57.101649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:52:57.116145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:52:57.153185 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:52:57.170347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:52:57.195408 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:52:57.206221 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:52:57.217737 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:52:57.230919 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:52:57.246042 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:52:57.263551 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 23:52:57.258900 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:52:57.269133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:52:57.269350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:52:57.295980 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 23:52:57.296003 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 23:52:57.283548 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:52:57.331437 kernel: scsi host1: storvsc_host_t Jan 23 23:52:57.331626 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 23 23:52:57.331638 kernel: scsi host0: storvsc_host_t Jan 23 23:52:57.331727 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 23:52:57.331809 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 23:52:57.331830 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 23:52:57.317850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:57.367491 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 23:52:57.367513 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 23:52:57.367547 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 23:52:57.367556 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 23:52:57.367566 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 23 23:52:57.318076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:57.363037 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:57.387208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:57.400176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:57.400903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:57.430245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:57.447027 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 23:52:57.447205 kernel: PTP clock support registered Jan 23 23:52:57.447216 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 23:52:57.456594 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 23:52:57.456640 kernel: hv_vmbus: registering driver hv_utils Jan 23 23:52:57.456651 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 23:52:57.460238 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 23:52:57.460281 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 23:52:57.460292 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 23:52:57.458303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:56.999933 kernel: hv_netvsc 7ced8dc0-449d-7ced-8dc0-449d7ced8dc0 eth0: VF slot 1 added Jan 23 23:52:57.008512 systemd-journald[218]: Time jumped backwards, rotating. Jan 23 23:52:57.008571 kernel: hv_vmbus: registering driver hv_pci Jan 23 23:52:56.977555 systemd-resolved[257]: Clock change detected. Flushing caches. Jan 23 23:52:57.020546 kernel: hv_pci 5d859841-b3c4-4971-a075-34903012b43b: PCI VMBus probing: Using version 0x10004 Jan 23 23:52:57.025777 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 23:52:57.026239 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 23:52:57.026359 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 23:52:57.026443 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 23:52:57.026543 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 23:52:57.033573 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 23 23:52:57.063415 kernel: hv_pci 5d859841-b3c4-4971-a075-34903012b43b: PCI host bridge to bus b3c4:00 Jan 23 23:52:57.063580 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:52:57.063675 kernel: pci_bus b3c4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 23:52:57.068607 kernel: pci_bus b3c4:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 23:52:57.072597 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:57.076373 kernel: pci b3c4:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 23 23:52:57.076417 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 23:52:57.084867 kernel: pci b3c4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:52:57.093626 kernel: pci b3c4:00:02.0: enabling Extended Tags Jan 23 23:52:57.109602 kernel: pci b3c4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b3c4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 23 23:52:57.113800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:52:57.132933 kernel: pci_bus b3c4:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 23:52:57.133146 kernel: pci b3c4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:52:57.145562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#230 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:52:57.180672 kernel: mlx5_core b3c4:00:02.0: enabling device (0000 -> 0002) Jan 23 23:52:57.186540 kernel: mlx5_core b3c4:00:02.0: firmware version: 16.30.5026 Jan 23 23:52:57.375554 kernel: hv_netvsc 7ced8dc0-449d-7ced-8dc0-449d7ced8dc0 eth0: VF registering: eth1 Jan 23 23:52:57.375744 kernel: mlx5_core b3c4:00:02.0 eth1: joined to eth0 Jan 23 23:52:57.383686 kernel: mlx5_core b3c4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 23:52:57.394546 kernel: mlx5_core b3c4:00:02.0 enP46020s1: renamed from eth1 Jan 23 23:52:57.967545 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (488) Jan 23 23:52:57.982896 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 23:52:58.002146 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (485) Jan 23 23:52:57.997799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 23:52:58.019406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:52:58.029787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 23:52:58.048682 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:52:58.059362 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 23:52:59.085549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:59.085873 disk-uuid[604]: The operation has completed successfully. Jan 23 23:52:59.154165 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:52:59.154278 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:52:59.182678 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 23 23:52:59.193674 sh[720]: Success Jan 23 23:52:59.223061 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:52:59.809158 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:52:59.824688 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:52:59.830556 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 23:52:59.863792 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:52:59.863839 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:59.869537 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:52:59.873782 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:52:59.878314 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:53:00.254633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:53:00.258427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:53:00.274788 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:53:00.281729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:53:00.307166 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:53:00.307205 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:53:00.310686 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:53:00.347862 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:53:00.361379 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:53:00.365690 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:53:00.373969 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:53:00.383864 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:53:00.415154 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:53:00.431655 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:53:00.457599 systemd-networkd[904]: lo: Link UP Jan 23 23:53:00.457607 systemd-networkd[904]: lo: Gained carrier Jan 23 23:53:00.459134 systemd-networkd[904]: Enumeration completed Jan 23 23:53:00.459734 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:53:00.459737 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:53:00.464646 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:53:00.469277 systemd[1]: Reached target network.target - Network. 
Jan 23 23:53:00.521544 kernel: mlx5_core b3c4:00:02.0 enP46020s1: Link up Jan 23 23:53:00.558543 kernel: hv_netvsc 7ced8dc0-449d-7ced-8dc0-449d7ced8dc0 eth0: Data path switched to VF: enP46020s1 Jan 23 23:53:00.558705 systemd-networkd[904]: enP46020s1: Link UP Jan 23 23:53:00.558790 systemd-networkd[904]: eth0: Link UP Jan 23 23:53:00.558887 systemd-networkd[904]: eth0: Gained carrier Jan 23 23:53:00.558896 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:53:00.577747 systemd-networkd[904]: enP46020s1: Gained carrier Jan 23 23:53:00.589571 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:53:01.484349 ignition[873]: Ignition 2.19.0 Jan 23 23:53:01.484361 ignition[873]: Stage: fetch-offline Jan 23 23:53:01.488192 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:53:01.484402 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:01.484416 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:01.484519 ignition[873]: parsed url from cmdline: "" Jan 23 23:53:01.484522 ignition[873]: no config URL provided Jan 23 23:53:01.484537 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:53:01.510680 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:53:01.484545 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:53:01.484553 ignition[873]: failed to fetch config: resource requires networking Jan 23 23:53:01.484786 ignition[873]: Ignition finished successfully Jan 23 23:53:01.526016 ignition[918]: Ignition 2.19.0 Jan 23 23:53:01.526023 ignition[918]: Stage: fetch Jan 23 23:53:01.526247 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:01.526256 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:01.526352 ignition[918]: parsed url from cmdline: "" Jan 23 23:53:01.526355 ignition[918]: no config URL provided Jan 23 23:53:01.526359 ignition[918]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:53:01.526370 ignition[918]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:53:01.526391 ignition[918]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 23:53:01.633668 ignition[918]: GET result: OK Jan 23 23:53:01.633712 ignition[918]: config has been read from IMDS userdata Jan 23 23:53:01.633732 ignition[918]: parsing config with SHA512: 22b44fb793cf3b1af82aa83369ab04be2b6899db5b2a3ea7b8c91ca652259e3f33509f5ba48b6e15b48f80592eeac3a34e76009e1f3a7c65fcfb0b10dbd848fa Jan 23 23:53:01.636670 unknown[918]: fetched base config from "system" Jan 23 23:53:01.637023 ignition[918]: fetch: fetch complete Jan 23 23:53:01.636677 unknown[918]: fetched base config from "system" Jan 23 23:53:01.637029 ignition[918]: fetch: fetch passed Jan 23 23:53:01.636683 unknown[918]: fetched user config from "azure" Jan 23 23:53:01.637084 ignition[918]: Ignition finished successfully Jan 23 23:53:01.641600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:53:01.660711 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 23:53:01.676693 ignition[925]: Ignition 2.19.0 Jan 23 23:53:01.676700 ignition[925]: Stage: kargs Jan 23 23:53:01.676930 ignition[925]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:01.682285 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:53:01.676942 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:01.678216 ignition[925]: kargs: kargs passed Jan 23 23:53:01.678261 ignition[925]: Ignition finished successfully Jan 23 23:53:01.703814 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:53:01.723891 ignition[931]: Ignition 2.19.0 Jan 23 23:53:01.723903 ignition[931]: Stage: disks Jan 23 23:53:01.724126 ignition[931]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:01.729001 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:53:01.724136 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:01.735264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:53:01.725150 ignition[931]: disks: disks passed Jan 23 23:53:01.743351 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:53:01.725198 ignition[931]: Ignition finished successfully Jan 23 23:53:01.752524 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:53:01.761119 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:53:01.770633 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:53:01.791791 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:53:01.860632 systemd-fsck[939]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 23 23:53:01.870273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:53:01.887755 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:53:01.945574 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:53:01.946343 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:53:01.950172 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:53:01.993603 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:53:02.005711 systemd-networkd[904]: eth0: Gained IPv6LL Jan 23 23:53:02.020726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (950) Jan 23 23:53:02.020764 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:53:02.025681 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:53:02.028972 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:53:02.031725 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:53:02.042154 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:53:02.043418 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 23:53:02.053970 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:53:02.059611 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:53:02.069822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:53:02.076432 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 23 23:53:02.091828 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:53:02.669059 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:53:02.691306 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:53:02.714169 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:53:02.724109 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:53:02.736716 coreos-metadata[967]: Jan 23 23:53:02.736 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:53:02.746212 coreos-metadata[967]: Jan 23 23:53:02.746 INFO Fetch successful Jan 23 23:53:02.746212 coreos-metadata[967]: Jan 23 23:53:02.746 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:53:02.759928 coreos-metadata[967]: Jan 23 23:53:02.759 INFO Fetch successful Jan 23 23:53:02.759928 coreos-metadata[967]: Jan 23 23:53:02.759 INFO wrote hostname ci-4081.3.6-n-9dffd30f3c to /sysroot/etc/hostname Jan 23 23:53:02.765459 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:53:03.803152 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:53:03.818002 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:53:03.826692 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:53:03.842618 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:53:03.838318 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:53:03.869749 ignition[1068]: INFO : Ignition 2.19.0 Jan 23 23:53:03.874386 ignition[1068]: INFO : Stage: mount Jan 23 23:53:03.874386 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:03.874386 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:03.874386 ignition[1068]: INFO : mount: mount passed Jan 23 23:53:03.874386 ignition[1068]: INFO : Ignition finished successfully Jan 23 23:53:03.876553 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:53:03.881629 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:53:03.903628 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:53:03.915853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:53:03.947545 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1079) Jan 23 23:53:03.958544 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:53:03.958593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:53:03.958604 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:53:03.968549 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:53:03.970098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:53:03.995215 ignition[1096]: INFO : Ignition 2.19.0 Jan 23 23:53:03.995215 ignition[1096]: INFO : Stage: files Jan 23 23:53:04.001621 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:04.001621 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:04.001621 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:53:04.016734 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:53:04.016734 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:53:04.114043 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:53:04.120296 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:53:04.120296 ignition[1096]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:53:04.115023 unknown[1096]: wrote ssh authorized keys file for user: core Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:53:04.682181 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 23 23:53:04.942844 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.942844 ignition[1096]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: files passed Jan 23 23:53:04.958166 ignition[1096]: INFO : Ignition finished successfully Jan 23 23:53:04.953317 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:53:04.986762 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:53:05.005035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:53:05.016905 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:53:05.053251 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:53:05.053251 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:53:05.022813 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:53:05.073607 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:53:05.030294 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:53:05.039783 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:53:05.066813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:53:05.103518 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:53:05.103650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:53:05.113233 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:53:05.122877 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:53:05.131570 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:53:05.144027 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:53:05.159691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:53:05.173076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:53:05.186400 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:53:05.191424 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:53:05.200983 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:53:05.209545 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:53:05.209667 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:53:05.221959 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:53:05.226308 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:53:05.234864 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:53:05.243398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 23 23:53:05.252416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:53:05.261451 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:53:05.270482 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:53:05.280719 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:53:05.289670 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:53:05.299729 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:53:05.307724 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:53:05.307841 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:53:05.320076 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:53:05.324754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:53:05.333697 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:53:05.333800 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:53:05.343057 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:53:05.343171 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:53:05.356124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:53:05.356237 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:53:05.361448 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:53:05.361543 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:53:05.369342 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:53:05.425196 ignition[1149]: INFO : Ignition 2.19.0 Jan 23 23:53:05.425196 ignition[1149]: INFO : Stage: umount Jan 23 23:53:05.425196 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:05.425196 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:05.425196 ignition[1149]: INFO : umount: umount passed Jan 23 23:53:05.425196 ignition[1149]: INFO : Ignition finished successfully Jan 23 23:53:05.369430 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:53:05.394857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:53:05.415199 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:53:05.426818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:53:05.426993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:53:05.432202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:53:05.432297 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:53:05.450437 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:53:05.451293 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:53:05.451406 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:53:05.465312 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:53:05.465639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:53:05.473873 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:53:05.473926 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 23 23:53:05.483675 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:53:05.483721 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:53:05.491862 systemd[1]: Stopped target network.target - Network. Jan 23 23:53:05.500328 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:53:05.500375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:53:05.509729 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:53:05.519209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:53:05.527573 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:53:05.533148 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:53:05.540548 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:53:05.549101 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:53:05.549145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:53:05.557565 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:53:05.557599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:53:05.566154 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:53:05.566204 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:53:05.574131 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:53:05.574170 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:53:05.582770 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:53:05.591230 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:53:05.602148 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:53:05.602239 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:53:05.610866 systemd-networkd[904]: eth0: DHCPv6 lease lost Jan 23 23:53:05.615369 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:53:05.615491 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:53:05.626470 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:53:05.626538 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:53:05.651018 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:53:05.800096 kernel: hv_netvsc 7ced8dc0-449d-7ced-8dc0-449d7ced8dc0 eth0: Data path switched from VF: enP46020s1 Jan 23 23:53:05.659029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:53:05.659102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:53:05.668232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:53:05.682401 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:53:05.683509 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:53:05.695737 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:53:05.695907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:53:05.721104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:53:05.721174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 23 23:53:05.726260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:53:05.726301 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:53:05.735574 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:53:05.735629 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:53:05.748219 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:53:05.748277 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:53:05.757542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:53:05.757589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:53:05.784733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:53:05.795116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:53:05.795203 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:53:05.804686 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:53:05.804743 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:53:05.814147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:53:05.814205 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:53:05.819547 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:53:05.819592 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:53:05.835561 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:53:05.835614 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:53:05.845964 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:53:05.846008 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:53:05.855623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:53:05.855668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:53:05.865524 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:53:05.865672 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:53:05.874637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:53:05.874714 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:53:05.884165 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:53:05.884259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:53:05.893579 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:53:05.893701 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:53:05.903786 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:53:05.929869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:53:06.159757 systemd[1]: Switching root. 
Jan 23 23:53:06.187600 systemd-journald[218]: Journal stopped
Jan 23 23:53:03.995215 ignition[1096]: INFO : Ignition 2.19.0 Jan 23 23:53:03.995215 ignition[1096]: INFO : Stage: files Jan 23 23:53:04.001621 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:04.001621 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:04.001621 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:53:04.016734 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:53:04.016734 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:53:04.114043 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:53:04.120296 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:53:04.120296 ignition[1096]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:53:04.115023 unknown[1096]: wrote ssh authorized keys file for user: core Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.135829 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:53:04.682181 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 23 23:53:04.942844 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:53:04.942844 ignition[1096]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:53:04.958166 ignition[1096]: INFO : files: files passed Jan 23 23:53:04.958166 ignition[1096]: INFO : Ignition finished successfully Jan 23 23:53:04.953317 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:53:04.986762 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:53:05.005035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:53:05.016905 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:53:05.053251 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:53:05.053251 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:53:05.022813 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:53:05.073607 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:53:05.030294 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:53:05.039783 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:53:05.066813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:53:05.103518 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:53:05.103650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:53:05.113233 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:53:05.122877 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:53:05.131570 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:53:05.144027 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:53:05.159691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:53:05.173076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:53:05.186400 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:53:05.191424 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:53:05.200983 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:53:05.209545 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:53:05.209667 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:53:05.221959 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:53:05.226308 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:53:05.234864 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:53:05.243398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 23 23:53:05.252416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:53:05.261451 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:53:05.270482 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:53:05.280719 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:53:05.289670 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:53:05.299729 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:53:05.307724 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:53:05.307841 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:53:05.320076 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:53:05.324754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:53:05.333697 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:53:05.333800 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:53:05.343057 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:53:05.343171 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:53:05.356124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:53:05.356237 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:53:05.361448 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:53:05.361543 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:53:05.369342 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:53:05.425196 ignition[1149]: INFO : Ignition 2.19.0 Jan 23 23:53:05.425196 ignition[1149]: INFO : Stage: umount Jan 23 23:53:05.425196 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:53:05.425196 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:53:05.425196 ignition[1149]: INFO : umount: umount passed Jan 23 23:53:05.425196 ignition[1149]: INFO : Ignition finished successfully Jan 23 23:53:05.369430 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:53:05.394857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:53:05.415199 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:53:05.426818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:53:05.426993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:53:05.432202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:53:05.432297 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:53:05.450437 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:53:05.451293 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:53:05.451406 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:53:05.465312 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:53:05.465639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:53:05.473873 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:53:05.473926 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 23 23:53:05.483675 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:53:05.483721 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:53:05.491862 systemd[1]: Stopped target network.target - Network. Jan 23 23:53:05.500328 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:53:05.500375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:53:05.509729 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:53:05.519209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:53:05.527573 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:53:05.533148 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:53:05.540548 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:53:05.549101 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:53:05.549145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:53:05.557565 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:53:05.557599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:53:05.566154 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:53:05.566204 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:53:05.574131 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:53:05.574170 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:53:05.582770 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:53:05.591230 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:53:05.602148 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:53:05.602239 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:53:05.610866 systemd-networkd[904]: eth0: DHCPv6 lease lost Jan 23 23:53:05.615369 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:53:05.615491 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:53:05.626470 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:53:05.626538 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:53:05.651018 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:53:05.800096 kernel: hv_netvsc 7ced8dc0-449d-7ced-8dc0-449d7ced8dc0 eth0: Data path switched from VF: enP46020s1 Jan 23 23:53:05.659029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:53:05.659102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:53:05.668232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:53:05.682401 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:53:05.683509 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:53:05.695737 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:53:05.695907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:53:05.721104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:53:05.721174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 23 23:53:05.726260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:53:05.726301 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:53:05.735574 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:53:05.735629 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:53:05.748219 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:53:05.748277 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:53:05.757542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:53:05.757589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:53:05.784733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:53:05.795116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:53:05.795203 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:53:05.804686 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:53:05.804743 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:53:05.814147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:53:05.814205 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:53:05.819547 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:53:05.819592 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:53:05.835561 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:53:05.835614 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:53:05.845964 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:53:05.846008 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:53:05.855623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:53:05.855668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:53:05.865524 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:53:05.865672 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:53:05.874637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:53:05.874714 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:53:05.884165 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:53:05.884259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:53:05.893579 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:53:05.893701 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:53:05.903786 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:53:05.929869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:53:06.159757 systemd[1]: Switching root. Jan 23 23:53:06.187600 systemd-journald[218]: Journal stopped Jan 23 23:53:11.508266 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:53:11.508294 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:53:11.508304 kernel: SELinux: policy capability open_perms=1 Jan 23 23:53:11.508314 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:53:11.508322 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:53:11.508330 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:53:11.508338 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:53:11.508347 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:53:11.508355 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:53:11.508363 kernel: audit: type=1403 audit(1769212388.273:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:53:11.508374 systemd[1]: Successfully loaded SELinux policy in 164.494ms. Jan 23 23:53:11.508383 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.953ms. Jan 23 23:53:11.508393 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:53:11.508402 systemd[1]: Detected virtualization microsoft. Jan 23 23:53:11.508412 systemd[1]: Detected architecture arm64. Jan 23 23:53:11.508422 systemd[1]: Detected first boot. Jan 23 23:53:11.508432 systemd[1]: Hostname set to <ci-4081.3.6-n-9dffd30f3c>. Jan 23 23:53:11.508441 systemd[1]: Initializing machine ID from random generator. Jan 23 23:53:11.508450 zram_generator::config[1208]: No configuration found. Jan 23 23:53:11.508462 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:53:11.508471 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:53:11.508482 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 23:53:11.508492 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:53:11.508501 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:53:11.508511 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:53:11.508520 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:53:11.508537 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:53:11.508548 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:53:11.508559 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:53:11.508569 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:53:11.508578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:53:11.508588 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:53:11.508597 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:53:11.508606 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:53:11.508616 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:53:11.508625 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 23 23:53:11.508635 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 23:53:11.508645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:53:11.508655 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:53:11.508664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:53:11.508676 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:53:11.508686 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:53:11.508695 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:53:11.508705 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:53:11.508716 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:53:11.508725 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:53:11.508734 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:53:11.508744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:53:11.508753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:53:11.508763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:53:11.508772 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:53:11.508784 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:53:11.508794 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:53:11.508803 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:53:11.508813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:53:11.508823 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:53:11.508832 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:53:11.508843 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:53:11.508853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:53:11.508863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:53:11.508873 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:53:11.508884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:53:11.508893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:53:11.508903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:53:11.508912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:53:11.508922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:53:11.508934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:53:11.508944 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 23 23:53:11.508954 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 23 23:53:11.508963 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 23 23:53:11.508973 kernel: fuse: init (API version 7.39) Jan 23 23:53:11.508981 kernel: loop: module loaded Jan 23 23:53:11.508990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:53:11.509000 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:53:11.509011 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:53:11.509035 systemd-journald[1329]: Collecting audit messages is disabled. Jan 23 23:53:11.509055 systemd-journald[1329]: Journal started Jan 23 23:53:11.509077 systemd-journald[1329]: Runtime Journal (/run/log/journal/a977498bcef6407e8d9d8ca03a86d38d) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:53:11.525723 kernel: ACPI: bus type drm_connector registered Jan 23 23:53:11.540039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:53:11.551474 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:53:11.552576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:53:11.557156 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:53:11.561745 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:53:11.565980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:53:11.570534 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:53:11.575632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:53:11.579942 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:53:11.585835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:53:11.591414 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:53:11.591574 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:53:11.596676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:53:11.596816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:53:11.601777 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:53:11.601922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:53:11.607142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:53:11.607286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:53:11.612807 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:53:11.612957 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:53:11.617729 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:53:11.617959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:53:11.623035 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:53:11.628029 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:53:11.633828 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:53:11.639632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:53:11.652571 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:53:11.663650 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 23 23:53:11.669992 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:53:11.674826 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:53:11.700653 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:53:11.706416 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:53:11.711340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:53:11.712458 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:53:11.717128 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:53:11.718262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:53:11.725680 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:53:11.735709 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:53:11.746307 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:53:11.754260 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:53:11.760037 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:53:11.770486 systemd-journald[1329]: Time spent on flushing to /var/log/journal/a977498bcef6407e8d9d8ca03a86d38d is 41.596ms for 868 entries. Jan 23 23:53:11.770486 systemd-journald[1329]: System Journal (/var/log/journal/a977498bcef6407e8d9d8ca03a86d38d) is 11.8M, max 2.6G, 2.6G free. Jan 23 23:53:11.856847 systemd-journald[1329]: Received client request to flush runtime journal. Jan 23 23:53:11.856901 systemd-journald[1329]: /var/log/journal/a977498bcef6407e8d9d8ca03a86d38d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 23 23:53:11.856924 systemd-journald[1329]: Rotating system journal. Jan 23 23:53:11.777229 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:53:11.784092 udevadm[1369]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:53:11.861771 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:53:11.888335 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Jan 23 23:53:11.888349 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Jan 23 23:53:11.892422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:53:11.905949 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:53:11.910786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:53:12.048396 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:53:12.060780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:53:12.082496 systemd-tmpfiles[1390]: ACLs are not supported, ignoring. Jan 23 23:53:12.082842 systemd-tmpfiles[1390]: ACLs are not supported, ignoring. 
Jan 23 23:53:12.089899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:53:12.598961 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:53:12.611877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:53:12.632038 systemd-udevd[1396]: Using default interface naming scheme 'v255'. Jan 23 23:53:12.784991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:53:12.805834 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:53:12.844740 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:53:12.855631 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 23 23:53:12.939028 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:53:12.968958 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:53:12.994547 kernel: hv_vmbus: registering driver hv_balloon Jan 23 23:53:13.009426 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 23:53:13.009512 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 23:53:13.013635 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 23:53:13.047723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:53:13.062308 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 23:53:13.062392 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 23:53:13.068087 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 23:53:13.071988 systemd-networkd[1408]: lo: Link UP Jan 23 23:53:13.071994 systemd-networkd[1408]: lo: Gained carrier Jan 23 23:53:13.073734 systemd-networkd[1408]: Enumeration completed Jan 23 23:53:13.074483 kernel: Console: switching to colour dummy device 80x25 Jan 23 23:53:13.074651 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:53:13.074828 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:53:13.074891 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:53:13.079555 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:53:13.090667 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:53:13.108626 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1401) Jan 23 23:53:13.160985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:53:13.161243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:53:13.165541 kernel: mlx5_core b3c4:00:02.0 enP46020s1: Link up Jan 23 23:53:13.183276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 23 23:53:13.193547 kernel: hv_netvsc 7ced8dc0-449d-7ced-8dc0-449d7ced8dc0 eth0: Data path switched to VF: enP46020s1 Jan 23 23:53:13.194619 systemd-networkd[1408]: enP46020s1: Link UP Jan 23 23:53:13.194711 systemd-networkd[1408]: eth0: Link UP Jan 23 23:53:13.194715 systemd-networkd[1408]: eth0: Gained carrier Jan 23 23:53:13.194727 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:53:13.198813 systemd-networkd[1408]: enP46020s1: Gained carrier Jan 23 23:53:13.204563 systemd-networkd[1408]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:53:13.216741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:53:13.293960 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:53:13.305646 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:53:13.362562 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:53:13.389942 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:53:13.395949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:53:13.405660 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:53:13.409661 lvm[1491]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:53:13.436061 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:53:13.441622 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:53:13.446886 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:53:13.446916 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:53:13.451101 systemd[1]: Reached target machines.target - Containers. Jan 23 23:53:13.456125 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:53:13.467722 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:53:13.473553 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:53:13.477961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:53:13.478896 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:53:13.484576 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:53:13.491686 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:53:13.512314 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:53:13.526578 kernel: loop0: detected capacity change from 0 to 207008 Jan 23 23:53:13.566463 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:53:13.584623 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:53:13.586198 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 23 23:53:13.630641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:53:13.668917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:53:13.677593 kernel: loop1: detected capacity change from 0 to 31320 Jan 23 23:53:14.062569 kernel: loop2: detected capacity change from 0 to 114432 Jan 23 23:53:14.395551 kernel: loop3: detected capacity change from 0 to 114328 Jan 23 23:53:14.790549 kernel: loop4: detected capacity change from 0 to 207008 Jan 23 23:53:14.810554 kernel: loop5: detected capacity change from 0 to 31320 Jan 23 23:53:14.823547 kernel: loop6: detected capacity change from 0 to 114432 Jan 23 23:53:14.835584 kernel: loop7: detected capacity change from 0 to 114328 Jan 23 23:53:14.843387 (sd-merge)[1516]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 23:53:14.843839 (sd-merge)[1516]: Merged extensions into '/usr'. Jan 23 23:53:14.848064 systemd[1]: Reloading requested from client PID 1498 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:53:14.848328 systemd[1]: Reloading... Jan 23 23:53:14.869680 systemd-networkd[1408]: eth0: Gained IPv6LL Jan 23 23:53:14.917560 zram_generator::config[1548]: No configuration found. Jan 23 23:53:15.052349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:15.129962 systemd[1]: Reloading finished in 281 ms. Jan 23 23:53:15.145632 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:53:15.152156 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:53:15.164684 systemd[1]: Starting ensure-sysext.service... Jan 23 23:53:15.170700 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:53:15.178139 systemd[1]: Reloading requested from client PID 1607 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:53:15.178253 systemd[1]: Reloading... Jan 23 23:53:15.210280 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:53:15.210559 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:53:15.212768 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:53:15.212993 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Jan 23 23:53:15.213036 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Jan 23 23:53:15.230115 systemd-tmpfiles[1608]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:53:15.230127 systemd-tmpfiles[1608]: Skipping /boot Jan 23 23:53:15.246562 systemd-tmpfiles[1608]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:53:15.246576 systemd-tmpfiles[1608]: Skipping /boot Jan 23 23:53:15.260572 zram_generator::config[1633]: No configuration found. Jan 23 23:53:15.386515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:15.468910 systemd[1]: Reloading finished in 290 ms. 
Jan 23 23:53:15.486947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:53:15.502749 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:15.509692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:53:15.521689 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:53:15.528436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:53:15.540703 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:53:15.549186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:53:15.554053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:53:15.568859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:53:15.580962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:53:15.590042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:53:15.590872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:53:15.591031 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:53:15.597087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:53:15.597244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:53:15.607078 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:53:15.607289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:53:15.621269 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:53:15.631499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:53:15.639843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:53:15.647810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:53:15.662002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:53:15.666963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:53:15.669264 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:53:15.670204 systemd-resolved[1706]: Positive Trust Anchors: Jan 23 23:53:15.670217 systemd-resolved[1706]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:53:15.670249 systemd-resolved[1706]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:53:15.675878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 23:53:15.676040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:53:15.682592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:53:15.682746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:53:15.688394 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:53:15.688593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:53:15.699869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:53:15.704735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:53:15.711681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:53:15.718879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:53:15.723720 augenrules[1740]: No rules Jan 23 23:53:15.725499 systemd-resolved[1706]: Using system hostname 'ci-4081.3.6-n-9dffd30f3c'. Jan 23 23:53:15.735952 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:53:15.740692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:53:15.740877 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:53:15.746075 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:53:15.751249 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:53:15.756568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:53:15.756729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:53:15.762269 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:53:15.762412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:53:15.769103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:53:15.769266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:53:15.775395 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:53:15.775672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:53:15.784728 systemd[1]: Finished ensure-sysext.service. Jan 23 23:53:15.791259 systemd[1]: Reached target network.target - Network. Jan 23 23:53:15.795070 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:53:15.799662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:53:15.804830 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:53:15.804900 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:53:16.043994 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:53:16.049788 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:53:18.668118 ldconfig[1495]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 23 23:53:18.685009 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:53:18.702685 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:53:18.716019 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:53:18.721217 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:53:18.726086 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:53:18.731949 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:53:18.737910 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:53:18.742664 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:53:18.748029 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:53:18.753256 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:53:18.753291 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:53:18.757207 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:53:18.762191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:53:18.768221 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:53:18.773764 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:53:18.778699 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:53:18.783282 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:53:18.787260 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:53:18.791364 systemd[1]: System is tainted: cgroupsv1 Jan 23 23:53:18.791404 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:53:18.791428 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:53:18.823600 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 23:53:18.829334 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:53:18.845785 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:53:18.854712 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:53:18.861478 (chronyd)[1778]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 23 23:53:18.866625 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:53:18.870018 jq[1785]: false Jan 23 23:53:18.873701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:53:18.882867 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:53:18.882915 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 23 23:53:18.884602 chronyd[1789]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 23 23:53:18.889718 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Jan 23 23:53:18.895051 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 23:53:18.897008 KVP[1790]: KVP starting; pid is:1790 Jan 23 23:53:18.897651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:18.904738 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:53:18.910434 chronyd[1789]: Timezone right/UTC failed leap second check, ignoring Jan 23 23:53:18.910691 chronyd[1789]: Loaded seccomp filter (level 2) Jan 23 23:53:18.914782 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:53:18.920317 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:53:18.931726 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:53:18.943033 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:53:18.952977 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:53:18.955496 extend-filesystems[1787]: Found loop4 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found loop5 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found loop6 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found loop7 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda1 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda2 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda3 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found usr Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda4 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda6 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda7 Jan 23 23:53:18.955496 extend-filesystems[1787]: Found sda9 Jan 23 23:53:18.955496 extend-filesystems[1787]: Checking size of /dev/sda9 Jan 23 23:53:19.106825 kernel: hv_utils: KVP IC version 4.0 Jan 23 23:53:18.957689 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:53:19.012307 KVP[1790]: KVP LIC Version: 3.1 Jan 23 23:53:18.980679 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:53:19.052972 dbus-daemon[1782]: [system] SELinux support is enabled Jan 23 23:53:18.995163 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 23:53:19.109002 update_engine[1808]: I20260123 23:53:19.055655 1808 main.cc:92] Flatcar Update Engine starting Jan 23 23:53:19.109002 update_engine[1808]: I20260123 23:53:19.058070 1808 update_check_scheduler.cc:74] Next update check in 9m6s Jan 23 23:53:19.021021 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:53:19.109283 jq[1814]: true Jan 23 23:53:19.021785 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:53:19.025854 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:53:19.026077 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:53:19.049854 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:53:19.065809 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:53:19.074728 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:53:19.075015 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 23:53:19.112049 systemd-logind[1805]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 23 23:53:19.115414 systemd-logind[1805]: New seat seat0. Jan 23 23:53:19.118312 (ntainerd)[1826]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:53:19.123736 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:53:19.129860 jq[1825]: true Jan 23 23:53:19.139033 dbus-daemon[1782]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:53:19.145065 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:53:19.146153 coreos-metadata[1781]: Jan 23 23:53:19.145 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:53:19.149265 coreos-metadata[1781]: Jan 23 23:53:19.149 INFO Fetch successful Jan 23 23:53:19.149265 coreos-metadata[1781]: Jan 23 23:53:19.149 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 23:53:19.152364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:53:19.152581 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:53:19.155720 coreos-metadata[1781]: Jan 23 23:53:19.154 INFO Fetch successful Jan 23 23:53:19.155720 coreos-metadata[1781]: Jan 23 23:53:19.155 INFO Fetching http://168.63.129.16/machine/7e5ae891-1236-4f15-8108-6c4fa94e4fd3/ddda3ed1%2D7551%2D4a66%2Db298%2D654863e10a17.%5Fci%2D4081.3.6%2Dn%2D9dffd30f3c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 23:53:19.159959 coreos-metadata[1781]: Jan 23 23:53:19.159 INFO Fetch successful Jan 23 23:53:19.159959 coreos-metadata[1781]: Jan 23 23:53:19.159 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:53:19.159899 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:53:19.160017 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:53:19.165955 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:53:19.173598 coreos-metadata[1781]: Jan 23 23:53:19.170 INFO Fetch successful Jan 23 23:53:19.174812 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:53:19.214512 extend-filesystems[1787]: Old size kept for /dev/sda9 Jan 23 23:53:19.214512 extend-filesystems[1787]: Found sr0 Jan 23 23:53:19.225090 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:53:19.225384 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:53:19.246070 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:53:19.253195 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:53:19.284553 bash[1869]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:53:19.285917 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 23 23:53:19.293201 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 23:53:19.366542 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1857) Jan 23 23:53:19.441414 sshd_keygen[1811]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:53:19.466228 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:53:19.479855 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:53:19.491212 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 23:53:19.497920 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:53:19.498664 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:53:19.506184 locksmithd[1843]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:53:19.516079 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:53:19.526265 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 23:53:19.538827 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:53:19.552910 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:53:19.558854 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:53:19.566322 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:53:19.746231 containerd[1826]: time="2026-01-23T23:53:19.746087980Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:53:19.777559 containerd[1826]: time="2026-01-23T23:53:19.777180420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.778922 containerd[1826]: time="2026-01-23T23:53:19.778628300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:53:19.778922 containerd[1826]: time="2026-01-23T23:53:19.778698740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:53:19.778922 containerd[1826]: time="2026-01-23T23:53:19.778715380Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:53:19.778922 containerd[1826]: time="2026-01-23T23:53:19.778882660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:53:19.778922 containerd[1826]: time="2026-01-23T23:53:19.778902260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779059 containerd[1826]: time="2026-01-23T23:53:19.778967780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779059 containerd[1826]: time="2026-01-23T23:53:19.778980260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779553 containerd[1826]: time="2026-01-23T23:53:19.779199780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779553 containerd[1826]: time="2026-01-23T23:53:19.779221820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779553 containerd[1826]: time="2026-01-23T23:53:19.779235660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779553 containerd[1826]: time="2026-01-23T23:53:19.779245300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779553 containerd[1826]: time="2026-01-23T23:53:19.779323020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779553 containerd[1826]: time="2026-01-23T23:53:19.779504580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779681 containerd[1826]: time="2026-01-23T23:53:19.779655820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:53:19.779681 containerd[1826]: time="2026-01-23T23:53:19.779671140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:53:19.779765 containerd[1826]: time="2026-01-23T23:53:19.779748420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:53:19.779806 containerd[1826]: time="2026-01-23T23:53:19.779792740Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:53:19.795442 containerd[1826]: time="2026-01-23T23:53:19.795394860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:53:19.795442 containerd[1826]: time="2026-01-23T23:53:19.795459060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:53:19.795666 containerd[1826]: time="2026-01-23T23:53:19.795475300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:53:19.795666 containerd[1826]: time="2026-01-23T23:53:19.795493940Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:53:19.795666 containerd[1826]: time="2026-01-23T23:53:19.795509540Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:53:19.796493 containerd[1826]: time="2026-01-23T23:53:19.795747980Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:53:19.796745 containerd[1826]: time="2026-01-23T23:53:19.796722020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:53:19.796873 containerd[1826]: time="2026-01-23T23:53:19.796856420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 23 23:53:19.796900 containerd[1826]: time="2026-01-23T23:53:19.796880340Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:53:19.796918 containerd[1826]: time="2026-01-23T23:53:19.796898620Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:53:19.796940 containerd[1826]: time="2026-01-23T23:53:19.796913540Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.796940 containerd[1826]: time="2026-01-23T23:53:19.796931340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.796976 containerd[1826]: time="2026-01-23T23:53:19.796947620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.796976 containerd[1826]: time="2026-01-23T23:53:19.796966380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.797007 containerd[1826]: time="2026-01-23T23:53:19.796984420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.797024 containerd[1826]: time="2026-01-23T23:53:19.797004780Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.797042 containerd[1826]: time="2026-01-23T23:53:19.797021220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.797042 containerd[1826]: time="2026-01-23T23:53:19.797034820Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:53:19.797079 containerd[1826]: time="2026-01-23T23:53:19.797060020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797096 containerd[1826]: time="2026-01-23T23:53:19.797077580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797119 containerd[1826]: time="2026-01-23T23:53:19.797097300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797143 containerd[1826]: time="2026-01-23T23:53:19.797114860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797143 containerd[1826]: time="2026-01-23T23:53:19.797131500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797179 containerd[1826]: time="2026-01-23T23:53:19.797148460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797179 containerd[1826]: time="2026-01-23T23:53:19.797161220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797218 containerd[1826]: time="2026-01-23T23:53:19.797177300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797218 containerd[1826]: time="2026-01-23T23:53:19.797193580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 23 23:53:19.797218 containerd[1826]: time="2026-01-23T23:53:19.797211100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797268 containerd[1826]: time="2026-01-23T23:53:19.797227420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797268 containerd[1826]: time="2026-01-23T23:53:19.797242940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797268 containerd[1826]: time="2026-01-23T23:53:19.797258900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797360 containerd[1826]: time="2026-01-23T23:53:19.797279140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:53:19.797360 containerd[1826]: time="2026-01-23T23:53:19.797303900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797360 containerd[1826]: time="2026-01-23T23:53:19.797319180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797360 containerd[1826]: time="2026-01-23T23:53:19.797330380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:53:19.797426 containerd[1826]: time="2026-01-23T23:53:19.797384660Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:53:19.797426 containerd[1826]: time="2026-01-23T23:53:19.797406180Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:53:19.797426 containerd[1826]: time="2026-01-23T23:53:19.797421460Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:53:19.797481 containerd[1826]: time="2026-01-23T23:53:19.797437460Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:53:19.797481 containerd[1826]: time="2026-01-23T23:53:19.797450540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:53:19.797481 containerd[1826]: time="2026-01-23T23:53:19.797463060Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:53:19.797481 containerd[1826]: time="2026-01-23T23:53:19.797475780Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:53:19.797573 containerd[1826]: time="2026-01-23T23:53:19.797489060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:53:19.799231 containerd[1826]: time="2026-01-23T23:53:19.798830020Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:53:19.799231 containerd[1826]: time="2026-01-23T23:53:19.798903660Z" level=info msg="Connect containerd service" Jan 23 23:53:19.799231 containerd[1826]: time="2026-01-23T23:53:19.798940060Z" level=info msg="using legacy CRI server" Jan 23 23:53:19.799231 containerd[1826]: time="2026-01-23T23:53:19.798947460Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:53:19.799231 containerd[1826]: time="2026-01-23T23:53:19.799037740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:53:19.800125 containerd[1826]: time="2026-01-23T23:53:19.799814260Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 
23:53:19.800125 containerd[1826]: time="2026-01-23T23:53:19.800123140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:53:19.800196 containerd[1826]: time="2026-01-23T23:53:19.800159540Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:53:19.800218 containerd[1826]: time="2026-01-23T23:53:19.800203660Z" level=info msg="Start subscribing containerd event" Jan 23 23:53:19.800246 containerd[1826]: time="2026-01-23T23:53:19.800233020Z" level=info msg="Start recovering state" Jan 23 23:53:19.800313 containerd[1826]: time="2026-01-23T23:53:19.800297100Z" level=info msg="Start event monitor" Jan 23 23:53:19.800313 containerd[1826]: time="2026-01-23T23:53:19.800311580Z" level=info msg="Start snapshots syncer" Jan 23 23:53:19.800370 containerd[1826]: time="2026-01-23T23:53:19.800336340Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:53:19.800370 containerd[1826]: time="2026-01-23T23:53:19.800344860Z" level=info msg="Start streaming server" Jan 23 23:53:19.800516 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:53:19.807399 containerd[1826]: time="2026-01-23T23:53:19.806105900Z" level=info msg="containerd successfully booted in 0.060926s" Jan 23 23:53:20.043856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:20.049111 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:53:20.049662 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:20.057607 systemd[1]: Startup finished in 13.496s (kernel) + 11.946s (userspace) = 25.443s. Jan 23 23:53:20.382680 login[1938]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:20.384877 login[1939]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:20.396584 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:53:20.397987 systemd-logind[1805]: New session 1 of user core. Jan 23 23:53:20.405939 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:53:20.409142 systemd-logind[1805]: New session 2 of user core. Jan 23 23:53:20.437216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:53:20.447826 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:53:20.465396 kubelet[1952]: E0123 23:53:20.465359 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:20.465725 (systemd)[1966]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:53:20.468292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:20.468490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:20.610488 systemd[1966]: Queued start job for default target default.target. Jan 23 23:53:20.610848 systemd[1966]: Created slice app.slice - User Application Slice. Jan 23 23:53:20.610868 systemd[1966]: Reached target paths.target - Paths. Jan 23 23:53:20.610879 systemd[1966]: Reached target timers.target - Timers. 
Jan 23 23:53:20.621671 systemd[1966]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:53:20.628698 systemd[1966]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:53:20.628756 systemd[1966]: Reached target sockets.target - Sockets. Jan 23 23:53:20.628769 systemd[1966]: Reached target basic.target - Basic System. Jan 23 23:53:20.628811 systemd[1966]: Reached target default.target - Main User Target. Jan 23 23:53:20.628836 systemd[1966]: Startup finished in 155ms. Jan 23 23:53:20.629672 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:53:20.633810 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:53:20.637373 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:53:21.566526 waagent[1934]: 2026-01-23T23:53:21.566441Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 23 23:53:21.570960 waagent[1934]: 2026-01-23T23:53:21.570911Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 23 23:53:21.574388 waagent[1934]: 2026-01-23T23:53:21.574350Z INFO Daemon Daemon Python: 3.11.9 Jan 23 23:53:21.577840 waagent[1934]: 2026-01-23T23:53:21.577627Z INFO Daemon Daemon Run daemon Jan 23 23:53:21.580863 waagent[1934]: 2026-01-23T23:53:21.580822Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 23 23:53:21.587549 waagent[1934]: 2026-01-23T23:53:21.587505Z INFO Daemon Daemon Using waagent for provisioning Jan 23 23:53:21.591665 waagent[1934]: 2026-01-23T23:53:21.591540Z INFO Daemon Daemon Activate resource disk Jan 23 23:53:21.595322 waagent[1934]: 2026-01-23T23:53:21.595279Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 23:53:21.604574 waagent[1934]: 2026-01-23T23:53:21.604522Z INFO Daemon Daemon Found device: None Jan 23 23:53:21.608058 waagent[1934]: 2026-01-23T23:53:21.608023Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 23:53:21.614534 waagent[1934]: 2026-01-23T23:53:21.614496Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 23:53:21.624814 waagent[1934]: 2026-01-23T23:53:21.624765Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:53:21.629478 waagent[1934]: 2026-01-23T23:53:21.629434Z INFO Daemon Daemon Running default provisioning handler Jan 23 23:53:21.639773 waagent[1934]: 2026-01-23T23:53:21.639710Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 23:53:21.650741 waagent[1934]: 2026-01-23T23:53:21.650683Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 23:53:21.658650 waagent[1934]: 2026-01-23T23:53:21.658605Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 23:53:21.662635 waagent[1934]: 2026-01-23T23:53:21.662597Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 23:53:21.773007 waagent[1934]: 2026-01-23T23:53:21.772921Z INFO Daemon Daemon Successfully mounted dvd Jan 23 23:53:21.786311 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 23:53:21.788203 waagent[1934]: 2026-01-23T23:53:21.788017Z INFO Daemon Daemon Detect protocol endpoint Jan 23 23:53:21.791794 waagent[1934]: 2026-01-23T23:53:21.791749Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:53:21.795946 waagent[1934]: 2026-01-23T23:53:21.795908Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 23:53:21.801203 waagent[1934]: 2026-01-23T23:53:21.801163Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 23:53:21.805504 waagent[1934]: 2026-01-23T23:53:21.805465Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 23:53:21.809234 waagent[1934]: 2026-01-23T23:53:21.809199Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 23:53:21.855079 waagent[1934]: 2026-01-23T23:53:21.855038Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 23:53:21.860063 waagent[1934]: 2026-01-23T23:53:21.860036Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 23:53:21.864164 waagent[1934]: 2026-01-23T23:53:21.864126Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 23:53:22.583565 waagent[1934]: 2026-01-23T23:53:22.583289Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 23:53:22.588284 waagent[1934]: 2026-01-23T23:53:22.588230Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 23:53:22.596001 waagent[1934]: 2026-01-23T23:53:22.595956Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:53:22.614361 waagent[1934]: 2026-01-23T23:53:22.614321Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 23:53:22.618821 waagent[1934]: 2026-01-23T23:53:22.618783Z INFO Daemon Jan 23 23:53:22.620921 waagent[1934]: 2026-01-23T23:53:22.620887Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7ce899f4-5284-4d75-be81-4a13678b9451 eTag: 5352841468276934797 source: Fabric] Jan 23 23:53:22.629486 waagent[1934]: 2026-01-23T23:53:22.629448Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 23:53:22.634727 waagent[1934]: 2026-01-23T23:53:22.634688Z INFO Daemon Jan 23 23:53:22.636800 waagent[1934]: 2026-01-23T23:53:22.636762Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:53:22.645770 waagent[1934]: 2026-01-23T23:53:22.645740Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 23:53:22.717374 waagent[1934]: 2026-01-23T23:53:22.717298Z INFO Daemon Downloaded certificate {'thumbprint': '75BC33B7ED8A10BE1E8AB1BCDA0D003C9E4D156B', 'hasPrivateKey': True} Jan 23 23:53:22.725057 waagent[1934]: 2026-01-23T23:53:22.725015Z INFO Daemon Fetch goal state completed Jan 23 23:53:22.734853 waagent[1934]: 2026-01-23T23:53:22.734801Z INFO Daemon Daemon Starting provisioning Jan 23 23:53:22.738753 waagent[1934]: 2026-01-23T23:53:22.738708Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 23:53:22.742325 waagent[1934]: 2026-01-23T23:53:22.742292Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-9dffd30f3c] Jan 23 23:53:22.768555 waagent[1934]: 2026-01-23T23:53:22.767875Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-9dffd30f3c] Jan 23 23:53:22.772751 waagent[1934]: 2026-01-23T23:53:22.772698Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 23:53:22.777513 waagent[1934]: 2026-01-23T23:53:22.777468Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 23:53:22.818877 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 23:53:22.818887 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:53:22.818931 systemd-networkd[1408]: eth0: DHCP lease lost Jan 23 23:53:22.820090 waagent[1934]: 2026-01-23T23:53:22.820014Z INFO Daemon Daemon Create user account if not exists Jan 23 23:53:22.824349 waagent[1934]: 2026-01-23T23:53:22.824304Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 23:53:22.828562 systemd-networkd[1408]: eth0: DHCPv6 lease lost Jan 23 23:53:22.829120 waagent[1934]: 2026-01-23T23:53:22.829063Z INFO Daemon Daemon Configure sudoer Jan 23 23:53:22.832637 waagent[1934]: 2026-01-23T23:53:22.832591Z INFO Daemon Daemon Configure sshd Jan 23 23:53:22.835985 waagent[1934]: 2026-01-23T23:53:22.835914Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 23:53:22.845610 waagent[1934]: 2026-01-23T23:53:22.845554Z INFO Daemon Daemon Deploy ssh public key. Jan 23 23:53:22.853573 systemd-networkd[1408]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:53:23.923085 waagent[1934]: 2026-01-23T23:53:23.923027Z INFO Daemon Daemon Provisioning complete Jan 23 23:53:23.937084 waagent[1934]: 2026-01-23T23:53:23.937041Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 23:53:23.942116 waagent[1934]: 2026-01-23T23:53:23.942065Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 23:53:23.949873 waagent[1934]: 2026-01-23T23:53:23.949833Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 23 23:53:24.077506 waagent[2022]: 2026-01-23T23:53:24.076908Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 23 23:53:24.077506 waagent[2022]: 2026-01-23T23:53:24.077050Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 23 23:53:24.077506 waagent[2022]: 2026-01-23T23:53:24.077103Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 23 23:53:24.122782 waagent[2022]: 2026-01-23T23:53:24.122706Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 23 23:53:24.123109 waagent[2022]: 2026-01-23T23:53:24.123071Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:53:24.123236 waagent[2022]: 2026-01-23T23:53:24.123205Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:53:24.131419 waagent[2022]: 2026-01-23T23:53:24.131344Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:53:24.137397 waagent[2022]: 2026-01-23T23:53:24.137354Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 23:53:24.138038 waagent[2022]: 2026-01-23T23:53:24.137999Z INFO ExtHandler Jan 23 23:53:24.138178 waagent[2022]: 2026-01-23T23:53:24.138148Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 74c0e58e-3ef0-436b-a452-41f994667138 eTag: 5352841468276934797 source: Fabric] Jan 23 23:53:24.139809 waagent[2022]: 2026-01-23T23:53:24.138514Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 23:53:24.139809 waagent[2022]: 2026-01-23T23:53:24.139115Z INFO ExtHandler Jan 23 23:53:24.139809 waagent[2022]: 2026-01-23T23:53:24.139187Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:53:24.143560 waagent[2022]: 2026-01-23T23:53:24.143252Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:53:24.216877 waagent[2022]: 2026-01-23T23:53:24.216747Z INFO ExtHandler Downloaded certificate {'thumbprint': '75BC33B7ED8A10BE1E8AB1BCDA0D003C9E4D156B', 'hasPrivateKey': True} Jan 23 23:53:24.217342 waagent[2022]: 2026-01-23T23:53:24.217298Z INFO ExtHandler Fetch goal state completed Jan 23 23:53:24.230105 waagent[2022]: 2026-01-23T23:53:24.230053Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2022 Jan 23 23:53:24.230258 waagent[2022]: 2026-01-23T23:53:24.230226Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 23:53:24.231871 waagent[2022]: 2026-01-23T23:53:24.231830Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 23:53:24.232230 waagent[2022]: 2026-01-23T23:53:24.232196Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 23:53:24.266705 waagent[2022]: 2026-01-23T23:53:24.266667Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 23:53:24.266896 waagent[2022]: 2026-01-23T23:53:24.266861Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 23:53:24.273128 waagent[2022]: 2026-01-23T23:53:24.273091Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 23:53:24.279280 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit waagent.service)... Jan 23 23:53:24.279524 systemd[1]: Reloading... Jan 23 23:53:24.355594 zram_generator::config[2065]: No configuration found. Jan 23 23:53:24.470285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:24.556021 systemd[1]: Reloading finished in 276 ms. Jan 23 23:53:24.576453 waagent[2022]: 2026-01-23T23:53:24.576240Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 23 23:53:24.582421 systemd[1]: Reloading requested from client PID 2128 ('systemctl') (unit waagent.service)... Jan 23 23:53:24.582522 systemd[1]: Reloading... Jan 23 23:53:24.661634 zram_generator::config[2162]: No configuration found. Jan 23 23:53:24.773516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:24.855718 systemd[1]: Reloading finished in 272 ms. Jan 23 23:53:24.875520 waagent[2022]: 2026-01-23T23:53:24.874771Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 23:53:24.875520 waagent[2022]: 2026-01-23T23:53:24.874931Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 23:53:25.355565 waagent[2022]: 2026-01-23T23:53:25.355007Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 23 23:53:25.355995 waagent[2022]: 2026-01-23T23:53:25.355605Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 23 23:53:25.356431 waagent[2022]: 2026-01-23T23:53:25.356374Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 23:53:25.356836 waagent[2022]: 2026-01-23T23:53:25.356734Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 23:53:25.357181 waagent[2022]: 2026-01-23T23:53:25.357043Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 23:53:25.357384 waagent[2022]: 2026-01-23T23:53:25.357295Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 23:53:25.357808 waagent[2022]: 2026-01-23T23:53:25.357713Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 23:53:25.357940 waagent[2022]: 2026-01-23T23:53:25.357809Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 23 23:53:25.358543 waagent[2022]: 2026-01-23T23:53:25.358102Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:53:25.358543 waagent[2022]: 2026-01-23T23:53:25.358199Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:53:25.358543 waagent[2022]: 2026-01-23T23:53:25.358407Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 23:53:25.358660 waagent[2022]: 2026-01-23T23:53:25.358606Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 23:53:25.358660 waagent[2022]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 23:53:25.358660 waagent[2022]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 23:53:25.358660 waagent[2022]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 23:53:25.358660 waagent[2022]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:53:25.358660 waagent[2022]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:53:25.358660 waagent[2022]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:53:25.359037 waagent[2022]: 2026-01-23T23:53:25.358997Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 23:53:25.359613 waagent[2022]: 2026-01-23T23:53:25.359525Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:53:25.359676 waagent[2022]: 2026-01-23T23:53:25.359641Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:53:25.359810 waagent[2022]: 2026-01-23T23:53:25.359769Z INFO EnvHandler ExtHandler Configure routes Jan 23 23:53:25.359867 waagent[2022]: 2026-01-23T23:53:25.359840Z INFO EnvHandler ExtHandler Gateway:None Jan 23 23:53:25.359912 waagent[2022]: 2026-01-23T23:53:25.359888Z INFO EnvHandler ExtHandler Routes:None Jan 23 23:53:25.364739 waagent[2022]: 2026-01-23T23:53:25.364700Z INFO ExtHandler ExtHandler Jan 23 23:53:25.365106 waagent[2022]: 2026-01-23T23:53:25.365047Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e9e65b84-732b-4e2b-b2f9-3dc404967fa4 correlation 33acef35-fefb-4d99-b1d6-26b4450da3aa created: 2026-01-23T23:52:26.148453Z] Jan 23 23:53:25.365983 waagent[2022]: 2026-01-23T23:53:25.365934Z INFO ExtHandler 
ExtHandler No extension handlers found, not processing anything. Jan 23 23:53:25.367827 waagent[2022]: 2026-01-23T23:53:25.367756Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 23 23:53:25.405974 waagent[2022]: 2026-01-23T23:53:25.405915Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 681074AC-B54F-4016-AAB8-5B2B556F8D0A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 23 23:53:25.418661 waagent[2022]: 2026-01-23T23:53:25.418589Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 23:53:25.418661 waagent[2022]: Executing ['ip', '-a', '-o', 'link']: Jan 23 23:53:25.418661 waagent[2022]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 23:53:25.418661 waagent[2022]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c0:44:9d brd ff:ff:ff:ff:ff:ff Jan 23 23:53:25.418661 waagent[2022]: 3: enP46020s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c0:44:9d brd ff:ff:ff:ff:ff:ff\ altname enP46020p0s2 Jan 23 23:53:25.418661 waagent[2022]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 23:53:25.418661 waagent[2022]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 23:53:25.418661 waagent[2022]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 23:53:25.418661 waagent[2022]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 23:53:25.418661 waagent[2022]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 23:53:25.418661 waagent[2022]: 2: eth0 inet6 fe80::7eed:8dff:fec0:449d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 23:53:25.475601 waagent[2022]: 2026-01-23T23:53:25.475447Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 23 23:53:25.475601 waagent[2022]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:53:25.475601 waagent[2022]: pkts bytes target prot opt in out source destination Jan 23 23:53:25.475601 waagent[2022]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:53:25.475601 waagent[2022]: pkts bytes target prot opt in out source destination Jan 23 23:53:25.475601 waagent[2022]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:53:25.475601 waagent[2022]: pkts bytes target prot opt in out source destination Jan 23 23:53:25.475601 waagent[2022]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:53:25.475601 waagent[2022]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:53:25.475601 waagent[2022]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:53:25.478293 waagent[2022]: 2026-01-23T23:53:25.478239Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 23:53:25.478293 waagent[2022]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:53:25.478293 waagent[2022]: pkts bytes target prot opt in out source destination Jan 23 23:53:25.478293 waagent[2022]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:53:25.478293 waagent[2022]: pkts bytes target prot opt in out source destination Jan 23 23:53:25.478293 waagent[2022]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:53:25.478293 waagent[2022]: pkts bytes target prot opt in out source destination Jan 23 23:53:25.478293 waagent[2022]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:53:25.478293 waagent[2022]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:53:25.478293 waagent[2022]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:53:25.478524 waagent[2022]: 2026-01-23T23:53:25.478491Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 23:53:30.573732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:53:30.583758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:30.684704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:30.694809 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:30.764948 kubelet[2264]: E0123 23:53:30.764888 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:30.769717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:30.769876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:40.823798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:53:40.831700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:40.964693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:53:40.968588 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:41.077610 kubelet[2284]: E0123 23:53:41.077484 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:41.080709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:41.080880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:42.210782 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:53:42.217754 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:35720.service - OpenSSH per-connection server daemon (10.200.16.10:35720). Jan 23 23:53:42.700926 chronyd[1789]: Selected source PHC0 Jan 23 23:53:42.738165 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 35720 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:42.739545 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:42.743782 systemd-logind[1805]: New session 3 of user core. Jan 23 23:53:42.749815 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:53:43.163766 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:35736.service - OpenSSH per-connection server daemon (10.200.16.10:35736). Jan 23 23:53:43.607587 sshd[2297]: Accepted publickey for core from 10.200.16.10 port 35736 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:43.608893 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:43.612432 systemd-logind[1805]: New session 4 of user core. Jan 23 23:53:43.620814 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:53:43.944742 sshd[2297]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:43.948136 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:35736.service: Deactivated successfully. Jan 23 23:53:43.950940 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:53:43.951353 systemd-logind[1805]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:53:43.952487 systemd-logind[1805]: Removed session 4. Jan 23 23:53:44.032744 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:35748.service - OpenSSH per-connection server daemon (10.200.16.10:35748). Jan 23 23:53:44.514857 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 35748 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:44.516193 sshd[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:44.519859 systemd-logind[1805]: New session 5 of user core. Jan 23 23:53:44.529879 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:53:44.862715 sshd[2305]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:44.865298 systemd-logind[1805]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:53:44.865520 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:35748.service: Deactivated successfully. Jan 23 23:53:44.868157 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:53:44.869525 systemd-logind[1805]: Removed session 5. 
Jan 23 23:53:44.953981 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:35762.service - OpenSSH per-connection server daemon (10.200.16.10:35762). Jan 23 23:53:45.434640 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 35762 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:45.435960 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:45.440505 systemd-logind[1805]: New session 6 of user core. Jan 23 23:53:45.454856 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:53:45.787249 sshd[2313]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:45.789880 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:53:45.790944 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:35762.service: Deactivated successfully. Jan 23 23:53:45.794417 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:53:45.795942 systemd-logind[1805]: Removed session 6. Jan 23 23:53:45.874847 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:35774.service - OpenSSH per-connection server daemon (10.200.16.10:35774). Jan 23 23:53:46.358852 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 35774 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:46.361314 sshd[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:46.364838 systemd-logind[1805]: New session 7 of user core. Jan 23 23:53:46.376948 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:53:46.737259 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:53:46.737561 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:46.752340 sudo[2325]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:46.830797 sshd[2321]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:46.834380 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:35774.service: Deactivated successfully. Jan 23 23:53:46.836978 systemd-logind[1805]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:53:46.837869 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:53:46.838514 systemd-logind[1805]: Removed session 7. Jan 23 23:53:46.917746 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:35786.service - OpenSSH per-connection server daemon (10.200.16.10:35786). Jan 23 23:53:47.400421 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 35786 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:47.401776 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:47.405854 systemd-logind[1805]: New session 8 of user core. Jan 23 23:53:47.411806 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 23:53:47.675252 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:53:47.675930 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:47.679138 sudo[2335]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:47.683644 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:53:47.683903 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:47.708065 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:47.709125 auditctl[2338]: No rules Jan 23 23:53:47.709585 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:53:47.709809 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:53:47.713280 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:47.735378 augenrules[2357]: No rules Jan 23 23:53:47.736750 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:53:47.739066 sudo[2334]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:47.817730 sshd[2330]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:47.821219 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:35786.service: Deactivated successfully. Jan 23 23:53:47.823731 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:53:47.824598 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:53:47.825319 systemd-logind[1805]: Removed session 8. Jan 23 23:53:47.900760 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:35798.service - OpenSSH per-connection server daemon (10.200.16.10:35798). Jan 23 23:53:48.386372 sshd[2366]: Accepted publickey for core from 10.200.16.10 port 35798 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:48.387697 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:48.391803 systemd-logind[1805]: New session 9 of user core. Jan 23 23:53:48.397819 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:53:48.661870 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:53:48.662144 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:49.097521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:49.104801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:49.132951 systemd[1]: Reloading requested from client PID 2404 ('systemctl') (unit session-9.scope)... Jan 23 23:53:49.132966 systemd[1]: Reloading... Jan 23 23:53:49.231756 zram_generator::config[2444]: No configuration found. Jan 23 23:53:49.354144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:49.439179 systemd[1]: Reloading finished in 305 ms. Jan 23 23:53:49.482519 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:53:49.482770 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:53:49.483111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:53:49.486234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:49.636711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:49.645891 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:53:49.813089 kubelet[2523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:49.813453 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:53:49.813818 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:49.813982 kubelet[2523]: I0123 23:53:49.813945 2523 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:53:50.263889 kubelet[2523]: I0123 23:53:50.263857 2523 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:53:50.264551 kubelet[2523]: I0123 23:53:50.264039 2523 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:53:50.264752 kubelet[2523]: I0123 23:53:50.264730 2523 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:53:50.288814 kubelet[2523]: I0123 23:53:50.288785 2523 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:53:50.294796 kubelet[2523]: E0123 23:53:50.294763 2523 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:53:50.294796 kubelet[2523]: I0123 23:53:50.294797 2523 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:53:50.298781 kubelet[2523]: I0123 23:53:50.298763 2523 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:53:50.299726 kubelet[2523]: I0123 23:53:50.299690 2523 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:53:50.299894 kubelet[2523]: I0123 23:53:50.299729 2523 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:53:50.299984 kubelet[2523]: I0123 23:53:50.299904 2523 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:53:50.299984 kubelet[2523]: I0123 23:53:50.299913 2523 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:53:50.300047 kubelet[2523]: I0123 23:53:50.300030 2523 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:50.302769 kubelet[2523]: I0123 23:53:50.302749 2523 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:53:50.302804 kubelet[2523]: I0123 23:53:50.302776 2523 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:53:50.302876 kubelet[2523]: I0123 23:53:50.302863 2523 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:53:50.302901 kubelet[2523]: I0123 23:53:50.302879 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:53:50.303952 kubelet[2523]: E0123 23:53:50.303932 2523 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:50.303990 kubelet[2523]: E0123 23:53:50.303976 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:50.306427 kubelet[2523]: I0123 23:53:50.306402 2523 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:53:50.306869 kubelet[2523]: I0123 23:53:50.306851 2523 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:53:50.306934 kubelet[2523]: W0123 23:53:50.306906 2523 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:53:50.307438 kubelet[2523]: I0123 23:53:50.307421 2523 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:53:50.307492 kubelet[2523]: I0123 23:53:50.307461 2523 server.go:1287] "Started kubelet" Jan 23 23:53:50.308567 kubelet[2523]: I0123 23:53:50.308040 2523 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:53:50.308924 kubelet[2523]: I0123 23:53:50.308909 2523 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:53:50.313925 kubelet[2523]: I0123 23:53:50.313857 2523 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:53:50.314167 kubelet[2523]: I0123 23:53:50.314146 2523 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:53:50.315912 kubelet[2523]: I0123 23:53:50.315845 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:53:50.316990 kubelet[2523]: E0123 23:53:50.316869 2523 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.35.188d8154e8ae24c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.35,UID:10.200.20.35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.35,},FirstTimestamp:2026-01-23 23:53:50.307435713 +0000 UTC m=+0.658094081,LastTimestamp:2026-01-23 23:53:50.307435713 +0000 UTC m=+0.658094081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.35,}" Jan 23 23:53:50.319595 kubelet[2523]: I0123 23:53:50.319570 2523 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:53:50.324859 kubelet[2523]: I0123 23:53:50.324837 2523 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:53:50.325228 kubelet[2523]: E0123 23:53:50.325212 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:50.326657 kubelet[2523]: W0123 23:53:50.326632 2523 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 23 23:53:50.326712 kubelet[2523]: E0123 23:53:50.326667 2523 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 23:53:50.326857 kubelet[2523]: W0123 23:53:50.326821 2523 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.20.35" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 23 23:53:50.326857 kubelet[2523]: E0123 23:53:50.326851 2523 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: nodes \"10.200.20.35\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 23:53:50.329693 kubelet[2523]: I0123 23:53:50.329660 2523 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:53:50.329693 kubelet[2523]: I0123 23:53:50.329727 2523 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:53:50.329915 kubelet[2523]: I0123 23:53:50.329895 2523 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:53:50.330006 kubelet[2523]: I0123 23:53:50.329985 2523 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:53:50.331495 kubelet[2523]: I0123 23:53:50.331472 2523 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:53:50.345819 kubelet[2523]: E0123 23:53:50.345795 2523 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.35\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 23 23:53:50.346545 kubelet[2523]: W0123 23:53:50.345959 2523 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 23 23:53:50.346545 kubelet[2523]: E0123 23:53:50.345985 2523 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 23 23:53:50.346545 kubelet[2523]: E0123 23:53:50.346116 2523 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.35.188d8154e9314031 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.35,UID:10.200.20.35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:10.200.20.35,},FirstTimestamp:2026-01-23 23:53:50.316027953 +0000 UTC m=+0.666686321,LastTimestamp:2026-01-23 23:53:50.316027953 +0000 UTC m=+0.666686321,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.35,}" Jan 23 23:53:50.354858 kubelet[2523]: I0123 23:53:50.354635 2523 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:53:50.354858 kubelet[2523]: I0123 23:53:50.354651 2523 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:53:50.354858 kubelet[2523]: I0123 23:53:50.354666 2523 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:50.360552 kubelet[2523]: I0123 23:53:50.360397 2523 policy_none.go:49] "None policy: Start" Jan 23 23:53:50.360552 kubelet[2523]: I0123 23:53:50.360445 2523 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 
23:53:50.360552 kubelet[2523]: I0123 23:53:50.360459 2523 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:53:50.367735 kubelet[2523]: I0123 23:53:50.367706 2523 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:53:50.367903 kubelet[2523]: I0123 23:53:50.367889 2523 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:53:50.367944 kubelet[2523]: I0123 23:53:50.367904 2523 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:53:50.371151 kubelet[2523]: I0123 23:53:50.369702 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:53:50.374160 kubelet[2523]: E0123 23:53:50.372554 2523 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:53:50.374160 kubelet[2523]: E0123 23:53:50.372592 2523 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.35\" not found" Jan 23 23:53:50.376077 kubelet[2523]: I0123 23:53:50.376045 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:53:50.377112 kubelet[2523]: I0123 23:53:50.377064 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:53:50.377112 kubelet[2523]: I0123 23:53:50.377092 2523 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:53:50.377112 kubelet[2523]: I0123 23:53:50.377111 2523 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:53:50.377112 kubelet[2523]: I0123 23:53:50.377117 2523 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:53:50.377239 kubelet[2523]: E0123 23:53:50.377211 2523 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 23:53:50.469511 kubelet[2523]: I0123 23:53:50.468796 2523 kubelet_node_status.go:75] "Attempting to register node" node="10.200.20.35" Jan 23 23:53:50.477677 kubelet[2523]: I0123 23:53:50.477655 2523 kubelet_node_status.go:78] "Successfully registered node" node="10.200.20.35" Jan 23 23:53:50.477723 kubelet[2523]: E0123 23:53:50.477692 2523 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.200.20.35\": node \"10.200.20.35\" not found" Jan 23 23:53:50.517309 kubelet[2523]: E0123 23:53:50.517206 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:50.617757 kubelet[2523]: E0123 23:53:50.617722 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:50.670121 sudo[2370]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:50.718043 kubelet[2523]: E0123 23:53:50.718001 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:50.749410 sshd[2366]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:50.753149 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:35798.service: Deactivated successfully. Jan 23 23:53:50.755546 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:53:50.756080 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 23 23:53:50.759732 systemd-logind[1805]: Removed session 9. Jan 23 23:53:50.819046 kubelet[2523]: E0123 23:53:50.818944 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:50.919592 kubelet[2523]: E0123 23:53:50.919559 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:51.020182 kubelet[2523]: E0123 23:53:51.020155 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:51.120834 kubelet[2523]: E0123 23:53:51.120801 2523 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" Jan 23 23:53:51.221932 kubelet[2523]: I0123 23:53:51.221806 2523 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 23 23:53:51.222327 containerd[1826]: time="2026-01-23T23:53:51.222228405Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:53:51.222926 kubelet[2523]: I0123 23:53:51.222389 2523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 23:53:51.268089 kubelet[2523]: I0123 23:53:51.267880 2523 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 23:53:51.268089 kubelet[2523]: W0123 23:53:51.268029 2523 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 23:53:51.268089 kubelet[2523]: W0123 23:53:51.268059 2523 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 23:53:51.304256 kubelet[2523]: I0123 23:53:51.304228 2523 apiserver.go:52] "Watching apiserver" Jan 23 23:53:51.304509 kubelet[2523]: E0123 23:53:51.304491 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:51.317016 kubelet[2523]: E0123 23:53:51.316883 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:53:51.330428 kubelet[2523]: I0123 23:53:51.330400 2523 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:53:51.335076 kubelet[2523]: I0123 23:53:51.335052 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-cni-net-dir\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335204 kubelet[2523]: I0123 23:53:51.335084 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/61471a5a-3646-4ffd-913d-c05fa79c7729-node-certs\") pod \"calico-node-lsl7k\" 
(UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335204 kubelet[2523]: I0123 23:53:51.335103 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pxpj\" (UniqueName: \"kubernetes.io/projected/bea5b43f-d3f4-4e81-968d-5ea668c5eb4c-kube-api-access-6pxpj\") pod \"kube-proxy-tkjzh\" (UID: \"bea5b43f-d3f4-4e81-968d-5ea668c5eb4c\") " pod="kube-system/kube-proxy-tkjzh" Jan 23 23:53:51.335204 kubelet[2523]: I0123 23:53:51.335121 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-lib-modules\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335204 kubelet[2523]: I0123 23:53:51.335136 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61471a5a-3646-4ffd-913d-c05fa79c7729-tigera-ca-bundle\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335204 kubelet[2523]: I0123 23:53:51.335170 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-var-lib-calico\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335322 kubelet[2523]: I0123 23:53:51.335185 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-xtables-lock\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335322 kubelet[2523]: I0123 23:53:51.335202 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-flexvol-driver-host\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335322 kubelet[2523]: I0123 23:53:51.335219 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6cc706f1-f210-4e7b-b9e2-07fb02a22dce-socket-dir\") pod \"csi-node-driver-fc6m5\" (UID: \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\") " pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:51.335322 kubelet[2523]: I0123 23:53:51.335233 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6cc706f1-f210-4e7b-b9e2-07fb02a22dce-varrun\") pod \"csi-node-driver-fc6m5\" (UID: \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\") " pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:51.335322 kubelet[2523]: I0123 23:53:51.335248 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdqn\" (UniqueName: \"kubernetes.io/projected/6cc706f1-f210-4e7b-b9e2-07fb02a22dce-kube-api-access-rcdqn\") pod \"csi-node-driver-fc6m5\" (UID: \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\") 
" pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:51.335422 kubelet[2523]: I0123 23:53:51.335263 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bea5b43f-d3f4-4e81-968d-5ea668c5eb4c-xtables-lock\") pod \"kube-proxy-tkjzh\" (UID: \"bea5b43f-d3f4-4e81-968d-5ea668c5eb4c\") " pod="kube-system/kube-proxy-tkjzh" Jan 23 23:53:51.335422 kubelet[2523]: I0123 23:53:51.335279 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea5b43f-d3f4-4e81-968d-5ea668c5eb4c-lib-modules\") pod \"kube-proxy-tkjzh\" (UID: \"bea5b43f-d3f4-4e81-968d-5ea668c5eb4c\") " pod="kube-system/kube-proxy-tkjzh" Jan 23 23:53:51.335422 kubelet[2523]: I0123 23:53:51.335294 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-cni-bin-dir\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335422 kubelet[2523]: I0123 23:53:51.335308 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-cni-log-dir\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335422 kubelet[2523]: I0123 23:53:51.335327 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-policysync\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335520 kubelet[2523]: I0123 23:53:51.335341 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/61471a5a-3646-4ffd-913d-c05fa79c7729-var-run-calico\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335520 kubelet[2523]: I0123 23:53:51.335359 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qgr\" (UniqueName: \"kubernetes.io/projected/61471a5a-3646-4ffd-913d-c05fa79c7729-kube-api-access-p4qgr\") pod \"calico-node-lsl7k\" (UID: \"61471a5a-3646-4ffd-913d-c05fa79c7729\") " pod="calico-system/calico-node-lsl7k" Jan 23 23:53:51.335520 kubelet[2523]: I0123 23:53:51.335375 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cc706f1-f210-4e7b-b9e2-07fb02a22dce-kubelet-dir\") pod \"csi-node-driver-fc6m5\" (UID: \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\") " pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:51.335520 kubelet[2523]: I0123 23:53:51.335389 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bea5b43f-d3f4-4e81-968d-5ea668c5eb4c-kube-proxy\") pod \"kube-proxy-tkjzh\" (UID: \"bea5b43f-d3f4-4e81-968d-5ea668c5eb4c\") " pod="kube-system/kube-proxy-tkjzh" Jan 23 23:53:51.335520 kubelet[2523]: I0123 23:53:51.335403 
2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6cc706f1-f210-4e7b-b9e2-07fb02a22dce-registration-dir\") pod \"csi-node-driver-fc6m5\" (UID: \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\") " pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:51.440858 kubelet[2523]: E0123 23:53:51.440155 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.441901 kubelet[2523]: W0123 23:53:51.440180 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.441960 kubelet[2523]: E0123 23:53:51.441930 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.443669 kubelet[2523]: E0123 23:53:51.443567 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.443669 kubelet[2523]: W0123 23:53:51.443585 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.443669 kubelet[2523]: E0123 23:53:51.443599 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.445006 kubelet[2523]: E0123 23:53:51.444870 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.445006 kubelet[2523]: W0123 23:53:51.444885 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.445225 kubelet[2523]: E0123 23:53:51.445134 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.445313 kubelet[2523]: E0123 23:53:51.445303 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.445403 kubelet[2523]: W0123 23:53:51.445362 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.445544 kubelet[2523]: E0123 23:53:51.445496 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.445710 kubelet[2523]: E0123 23:53:51.445660 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.445710 kubelet[2523]: W0123 23:53:51.445670 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.445828 kubelet[2523]: E0123 23:53:51.445749 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.446072 kubelet[2523]: E0123 23:53:51.445991 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.446072 kubelet[2523]: W0123 23:53:51.446003 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.446217 kubelet[2523]: E0123 23:53:51.446170 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.446352 kubelet[2523]: E0123 23:53:51.446273 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.446352 kubelet[2523]: W0123 23:53:51.446282 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.446632 kubelet[2523]: E0123 23:53:51.446485 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.446632 kubelet[2523]: E0123 23:53:51.446588 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.446632 kubelet[2523]: W0123 23:53:51.446596 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.446794 kubelet[2523]: E0123 23:53:51.446736 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.446928 kubelet[2523]: E0123 23:53:51.446886 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.446928 kubelet[2523]: W0123 23:53:51.446895 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.447055 kubelet[2523]: E0123 23:53:51.446991 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.447149 kubelet[2523]: E0123 23:53:51.447141 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.447203 kubelet[2523]: W0123 23:53:51.447194 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.447333 kubelet[2523]: E0123 23:53:51.447309 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.447333 kubelet[2523]: E0123 23:53:51.447371 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.447333 kubelet[2523]: W0123 23:53:51.447381 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.447606 kubelet[2523]: E0123 23:53:51.447595 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.447692 kubelet[2523]: E0123 23:53:51.447685 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.447745 kubelet[2523]: W0123 23:53:51.447728 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.447850 kubelet[2523]: E0123 23:53:51.447808 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.448060 kubelet[2523]: E0123 23:53:51.448010 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.448060 kubelet[2523]: W0123 23:53:51.448021 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.448166 kubelet[2523]: E0123 23:53:51.448099 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.448397 kubelet[2523]: E0123 23:53:51.448315 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.448397 kubelet[2523]: W0123 23:53:51.448326 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.448510 kubelet[2523]: E0123 23:53:51.448479 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.448695 kubelet[2523]: E0123 23:53:51.448599 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.448695 kubelet[2523]: W0123 23:53:51.448608 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.448784 kubelet[2523]: E0123 23:53:51.448773 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.448873 kubelet[2523]: E0123 23:53:51.448866 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.448925 kubelet[2523]: W0123 23:53:51.448911 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.449071 kubelet[2523]: E0123 23:53:51.449034 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.449182 kubelet[2523]: E0123 23:53:51.449162 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.449182 kubelet[2523]: W0123 23:53:51.449171 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.449346 kubelet[2523]: E0123 23:53:51.449313 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.449462 kubelet[2523]: E0123 23:53:51.449442 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.449462 kubelet[2523]: W0123 23:53:51.449452 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.449647 kubelet[2523]: E0123 23:53:51.449625 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.449647 kubelet[2523]: E0123 23:53:51.449688 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.449647 kubelet[2523]: W0123 23:53:51.449696 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.449888 kubelet[2523]: E0123 23:53:51.449877 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.449994 kubelet[2523]: E0123 23:53:51.449975 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.449994 kubelet[2523]: W0123 23:53:51.449983 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.450180 kubelet[2523]: E0123 23:53:51.450126 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.450292 kubelet[2523]: E0123 23:53:51.450272 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.450292 kubelet[2523]: W0123 23:53:51.450282 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.450457 kubelet[2523]: E0123 23:53:51.450418 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.450577 kubelet[2523]: E0123 23:53:51.450557 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.450577 kubelet[2523]: W0123 23:53:51.450566 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.450851 kubelet[2523]: E0123 23:53:51.450722 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.450851 kubelet[2523]: E0123 23:53:51.450785 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.450851 kubelet[2523]: W0123 23:53:51.450792 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.451025 kubelet[2523]: E0123 23:53:51.450962 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.451096 kubelet[2523]: E0123 23:53:51.451088 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.451153 kubelet[2523]: W0123 23:53:51.451143 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.451291 kubelet[2523]: E0123 23:53:51.451261 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.451467 kubelet[2523]: E0123 23:53:51.451443 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.451467 kubelet[2523]: W0123 23:53:51.451453 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.451587 kubelet[2523]: E0123 23:53:51.451546 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.451871 kubelet[2523]: E0123 23:53:51.451779 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.451871 kubelet[2523]: W0123 23:53:51.451790 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.452010 kubelet[2523]: E0123 23:53:51.451973 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.452103 kubelet[2523]: E0123 23:53:51.452084 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.452103 kubelet[2523]: W0123 23:53:51.452093 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.452289 kubelet[2523]: E0123 23:53:51.452230 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.452394 kubelet[2523]: E0123 23:53:51.452385 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.452500 kubelet[2523]: W0123 23:53:51.452442 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.452628 kubelet[2523]: E0123 23:53:51.452614 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.453119 kubelet[2523]: E0123 23:53:51.453025 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.453119 kubelet[2523]: W0123 23:53:51.453036 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.453291 kubelet[2523]: E0123 23:53:51.453239 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.453291 kubelet[2523]: W0123 23:53:51.453249 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.453291 kubelet[2523]: E0123 23:53:51.453260 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.453291 kubelet[2523]: E0123 23:53:51.453278 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.461716 kubelet[2523]: E0123 23:53:51.461601 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.461716 kubelet[2523]: W0123 23:53:51.461622 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.461716 kubelet[2523]: E0123 23:53:51.461643 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.462357 kubelet[2523]: E0123 23:53:51.462344 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.462640 kubelet[2523]: W0123 23:53:51.462426 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.462640 kubelet[2523]: E0123 23:53:51.462443 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:53:51.467700 kubelet[2523]: E0123 23:53:51.467677 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:53:51.467700 kubelet[2523]: W0123 23:53:51.467696 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:53:51.467808 kubelet[2523]: E0123 23:53:51.467712 2523 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:53:51.620559 containerd[1826]: time="2026-01-23T23:53:51.620267897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkjzh,Uid:bea5b43f-d3f4-4e81-968d-5ea668c5eb4c,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:51.624944 containerd[1826]: time="2026-01-23T23:53:51.624910877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lsl7k,Uid:61471a5a-3646-4ffd-913d-c05fa79c7729,Namespace:calico-system,Attempt:0,}" Jan 23 23:53:52.305242 kubelet[2523]: E0123 23:53:52.305207 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:52.392566 containerd[1826]: time="2026-01-23T23:53:52.391792297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.397282 containerd[1826]: time="2026-01-23T23:53:52.397234600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.399910 containerd[1826]: time="2026-01-23T23:53:52.399881171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:53:52.403299 containerd[1826]: time="2026-01-23T23:53:52.403269026Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.406409 containerd[1826]: time="2026-01-23T23:53:52.406378799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:53:52.410703 containerd[1826]: time="2026-01-23T23:53:52.410648297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:52.411916 containerd[1826]: time="2026-01-23T23:53:52.411126179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 790.776162ms" Jan 23 23:53:52.411916 containerd[1826]: time="2026-01-23T23:53:52.411788302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 786.810625ms" Jan 23 23:53:52.441929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451207819.mount: Deactivated successfully. Jan 23 23:53:52.911892 containerd[1826]: time="2026-01-23T23:53:52.911807227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:52.912482 containerd[1826]: time="2026-01-23T23:53:52.911860228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:52.912768 containerd[1826]: time="2026-01-23T23:53:52.912716271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:52.912852 containerd[1826]: time="2026-01-23T23:53:52.912758711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:52.912913 containerd[1826]: time="2026-01-23T23:53:52.912847112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.913317 containerd[1826]: time="2026-01-23T23:53:52.913281474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.914351 containerd[1826]: time="2026-01-23T23:53:52.914306678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:52.914611 containerd[1826]: time="2026-01-23T23:53:52.914570799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:53.227949 containerd[1826]: time="2026-01-23T23:53:53.227833411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lsl7k,Uid:61471a5a-3646-4ffd-913d-c05fa79c7729,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\"" Jan 23 23:53:53.230937 containerd[1826]: time="2026-01-23T23:53:53.230894024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkjzh,Uid:bea5b43f-d3f4-4e81-968d-5ea668c5eb4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a9f20d787b8e3e0a6b30ff7e695e388f89fda2be5287ea4676693494d6f806\"" Jan 23 23:53:53.232540 containerd[1826]: time="2026-01-23T23:53:53.232503231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:53:53.306042 kubelet[2523]: E0123 23:53:53.305999 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:53.378328 kubelet[2523]: E0123 23:53:53.378284 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:53:54.306788 kubelet[2523]: E0123 23:53:54.306653 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:54.309671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867882688.mount: Deactivated successfully. 
Jan 23 23:53:54.405174 containerd[1826]: time="2026-01-23T23:53:54.404652931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:54.407437 containerd[1826]: time="2026-01-23T23:53:54.407400256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Jan 23 23:53:54.410263 containerd[1826]: time="2026-01-23T23:53:54.410222461Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:54.414171 containerd[1826]: time="2026-01-23T23:53:54.414121828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:54.414893 containerd[1826]: time="2026-01-23T23:53:54.414731909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.182172318s" Jan 23 23:53:54.414893 containerd[1826]: time="2026-01-23T23:53:54.414760149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:53:54.416351 containerd[1826]: time="2026-01-23T23:53:54.416285831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:53:54.417426 containerd[1826]: time="2026-01-23T23:53:54.417397073Z" level=info msg="CreateContainer within sandbox \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:53:54.451540 containerd[1826]: time="2026-01-23T23:53:54.451483733Z" level=info msg="CreateContainer within sandbox \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"baf5b7454d5c7858090d92b8453ca172414940fad240977cb632ee0850a0f8d1\"" Jan 23 23:53:54.452306 containerd[1826]: time="2026-01-23T23:53:54.452282815Z" level=info msg="StartContainer for \"baf5b7454d5c7858090d92b8453ca172414940fad240977cb632ee0850a0f8d1\"" Jan 23 23:53:54.513188 containerd[1826]: time="2026-01-23T23:53:54.513050762Z" level=info msg="StartContainer for \"baf5b7454d5c7858090d92b8453ca172414940fad240977cb632ee0850a0f8d1\" returns successfully" Jan 23 23:53:54.541636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baf5b7454d5c7858090d92b8453ca172414940fad240977cb632ee0850a0f8d1-rootfs.mount: Deactivated successfully. 
Jan 23 23:53:54.623411 containerd[1826]: time="2026-01-23T23:53:54.623216555Z" level=info msg="shim disconnected" id=baf5b7454d5c7858090d92b8453ca172414940fad240977cb632ee0850a0f8d1 namespace=k8s.io Jan 23 23:53:54.623411 containerd[1826]: time="2026-01-23T23:53:54.623265555Z" level=warning msg="cleaning up after shim disconnected" id=baf5b7454d5c7858090d92b8453ca172414940fad240977cb632ee0850a0f8d1 namespace=k8s.io Jan 23 23:53:54.623411 containerd[1826]: time="2026-01-23T23:53:54.623273036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:53:55.307768 kubelet[2523]: E0123 23:53:55.307735 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:55.377660 kubelet[2523]: E0123 23:53:55.377320 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:53:55.463981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489693000.mount: Deactivated successfully. Jan 23 23:53:55.751574 containerd[1826]: time="2026-01-23T23:53:55.751296019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:55.754056 containerd[1826]: time="2026-01-23T23:53:55.753877344Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:53:55.756843 containerd[1826]: time="2026-01-23T23:53:55.756769589Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:55.762071 containerd[1826]: time="2026-01-23T23:53:55.762026238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:55.764788 containerd[1826]: time="2026-01-23T23:53:55.763989482Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.34767573s" Jan 23 23:53:55.764788 containerd[1826]: time="2026-01-23T23:53:55.764026762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:53:55.767619 containerd[1826]: time="2026-01-23T23:53:55.767593888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:53:55.768834 containerd[1826]: time="2026-01-23T23:53:55.768800730Z" level=info msg="CreateContainer within sandbox \"d4a9f20d787b8e3e0a6b30ff7e695e388f89fda2be5287ea4676693494d6f806\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:53:55.802316 containerd[1826]: time="2026-01-23T23:53:55.802270069Z" level=info msg="CreateContainer within sandbox \"d4a9f20d787b8e3e0a6b30ff7e695e388f89fda2be5287ea4676693494d6f806\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"e5e5fa10dc0d4b77a9ad271f8b071513f364abab62d5427dc3dc348c8fa544be\"" Jan 23 23:53:55.803079 containerd[1826]: time="2026-01-23T23:53:55.803048910Z" level=info msg="StartContainer for \"e5e5fa10dc0d4b77a9ad271f8b071513f364abab62d5427dc3dc348c8fa544be\"" Jan 23 23:53:55.859139 containerd[1826]: time="2026-01-23T23:53:55.859094129Z" level=info msg="StartContainer for \"e5e5fa10dc0d4b77a9ad271f8b071513f364abab62d5427dc3dc348c8fa544be\" returns successfully" Jan 23 23:53:56.308811 kubelet[2523]: E0123 23:53:56.308760 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:57.309854 kubelet[2523]: E0123 23:53:57.309819 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:57.378246 kubelet[2523]: E0123 23:53:57.378201 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:53:57.998447 containerd[1826]: time="2026-01-23T23:53:57.997729410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:58.000190 containerd[1826]: time="2026-01-23T23:53:58.000159534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:53:58.003067 containerd[1826]: time="2026-01-23T23:53:58.003012859Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:58.007516 containerd[1826]: time="2026-01-23T23:53:58.007301787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:58.008068 containerd[1826]: time="2026-01-23T23:53:58.008038708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.24028762s" Jan 23 23:53:58.008118 containerd[1826]: time="2026-01-23T23:53:58.008068708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:53:58.010702 containerd[1826]: time="2026-01-23T23:53:58.010501393Z" level=info msg="CreateContainer within sandbox \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:53:58.044178 containerd[1826]: time="2026-01-23T23:53:58.044133652Z" level=info msg="CreateContainer within sandbox \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef\"" Jan 23 23:53:58.044928 containerd[1826]: time="2026-01-23T23:53:58.044890413Z" level=info msg="StartContainer for 
\"1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef\"" Jan 23 23:53:58.099434 containerd[1826]: time="2026-01-23T23:53:58.098758028Z" level=info msg="StartContainer for \"1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef\" returns successfully" Jan 23 23:53:58.311015 kubelet[2523]: E0123 23:53:58.310909 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:58.419982 kubelet[2523]: I0123 23:53:58.419918 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tkjzh" podStartSLOduration=5.884854295 podStartE2EDuration="8.419903113s" podCreationTimestamp="2026-01-23 23:53:50 +0000 UTC" firstStartedPulling="2026-01-23 23:53:53.232160389 +0000 UTC m=+3.582818757" lastFinishedPulling="2026-01-23 23:53:55.767209167 +0000 UTC m=+6.117867575" observedRunningTime="2026-01-23 23:53:56.412716583 +0000 UTC m=+6.763374951" watchObservedRunningTime="2026-01-23 23:53:58.419903113 +0000 UTC m=+8.770561481" Jan 23 23:53:59.311517 kubelet[2523]: E0123 23:53:59.311476 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:53:59.356579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef-rootfs.mount: Deactivated successfully. Jan 23 23:53:59.376042 kubelet[2523]: I0123 23:53:59.376012 2523 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:53:59.381343 containerd[1826]: time="2026-01-23T23:53:59.381294042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fc6m5,Uid:6cc706f1-f210-4e7b-b9e2-07fb02a22dce,Namespace:calico-system,Attempt:0,}" Jan 23 23:53:59.538475 containerd[1826]: time="2026-01-23T23:53:59.538419886Z" level=error msg="Failed to destroy network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:53:59.540064 containerd[1826]: time="2026-01-23T23:53:59.538791567Z" level=error msg="encountered an error cleaning up failed sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:53:59.540064 containerd[1826]: time="2026-01-23T23:53:59.538841207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fc6m5,Uid:6cc706f1-f210-4e7b-b9e2-07fb02a22dce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:53:59.540358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca-shm.mount: Deactivated successfully. 
Jan 23 23:53:59.541108 kubelet[2523]: E0123 23:53:59.540744 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:53:59.541108 kubelet[2523]: E0123 23:53:59.540818 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:59.541108 kubelet[2523]: E0123 23:53:59.540841 2523 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fc6m5" Jan 23 23:53:59.541243 kubelet[2523]: E0123 23:53:59.540889 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:00.312211 kubelet[2523]: E0123 23:54:00.312166 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:00.378561 containerd[1826]: time="2026-01-23T23:54:00.378373737Z" level=error msg="collecting metrics for 1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef" error="cgroups: cgroup deleted: unknown" Jan 23 23:54:00.402968 kubelet[2523]: I0123 23:54:00.402933 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:00.403996 containerd[1826]: time="2026-01-23T23:54:00.403632149Z" level=info msg="StopPodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\"" Jan 23 23:54:00.403996 containerd[1826]: time="2026-01-23T23:54:00.403780750Z" level=info msg="Ensure that sandbox 6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca in task-service has been cleanup successfully" Jan 23 23:54:00.857800 containerd[1826]: time="2026-01-23T23:54:00.857761126Z" level=error msg="StopPodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" failed" error="failed to destroy network for sandbox 
\"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:54:00.858682 kubelet[2523]: E0123 23:54:00.858495 2523 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:00.858682 kubelet[2523]: E0123 23:54:00.858564 2523 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca"} Jan 23 23:54:00.858682 kubelet[2523]: E0123 23:54:00.858617 2523 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:54:00.858682 kubelet[2523]: E0123 23:54:00.858638 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cc706f1-f210-4e7b-b9e2-07fb02a22dce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:00.870841 containerd[1826]: time="2026-01-23T23:54:00.870719272Z" level=info msg="shim disconnected" id=1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef namespace=k8s.io Jan 23 23:54:00.870841 containerd[1826]: time="2026-01-23T23:54:00.870770392Z" level=warning msg="cleaning up after shim disconnected" id=1015ae76e2b20d19b62c3522c44e87db9880a39ff61bf1212b5756a815e92aef namespace=k8s.io Jan 23 23:54:00.870841 containerd[1826]: time="2026-01-23T23:54:00.870789552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:54:01.120248 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 23 23:54:01.313156 kubelet[2523]: E0123 23:54:01.313117 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:01.407579 containerd[1826]: time="2026-01-23T23:54:01.407414778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:54:01.487405 waagent[2022]: 2026-01-23T23:54:01.486581Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 23:54:01.495568 waagent[2022]: 2026-01-23T23:54:01.495007Z INFO ExtHandler Jan 23 23:54:01.495568 waagent[2022]: 2026-01-23T23:54:01.495125Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 907bc6df-ca3e-47d7-a338-7cf7b41f7937 eTag: 10937504103606785313 source: Fabric] Jan 23 23:54:01.495568 waagent[2022]: 2026-01-23T23:54:01.495481Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 23:54:01.496183 waagent[2022]: 2026-01-23T23:54:01.496131Z INFO ExtHandler Jan 23 23:54:01.496249 waagent[2022]: 2026-01-23T23:54:01.496221Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 23:54:01.592684 waagent[2022]: 2026-01-23T23:54:01.592637Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:54:01.653512 waagent[2022]: 2026-01-23T23:54:01.653421Z INFO ExtHandler Downloaded certificate {'thumbprint': '75BC33B7ED8A10BE1E8AB1BCDA0D003C9E4D156B', 'hasPrivateKey': True} Jan 23 23:54:01.654051 waagent[2022]: 2026-01-23T23:54:01.654005Z INFO ExtHandler Fetch goal state completed Jan 23 23:54:01.654415 waagent[2022]: 2026-01-23T23:54:01.654376Z INFO ExtHandler ExtHandler Jan 23 23:54:01.654483 waagent[2022]: 2026-01-23T23:54:01.654455Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 63c4d5b2-594f-4c1d-ac6a-97c97c939316 correlation 33acef35-fefb-4d99-b1d6-26b4450da3aa created: 2026-01-23T23:53:51.789597Z] Jan 23 23:54:01.654878 waagent[2022]: 2026-01-23T23:54:01.654832Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:54:01.655377 waagent[2022]: 2026-01-23T23:54:01.655341Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 23:54:02.313988 kubelet[2523]: E0123 23:54:02.313946 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:03.314933 kubelet[2523]: E0123 23:54:03.314886 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:04.315717 kubelet[2523]: E0123 23:54:04.315661 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:04.628646 update_engine[1808]: I20260123 23:54:04.628566 1808 update_attempter.cc:509] Updating boot flags... 
Jan 23 23:54:04.702674 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3062) Jan 23 23:54:04.813598 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3066) Jan 23 23:54:05.017073 kubelet[2523]: I0123 23:54:05.016980 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5t2g\" (UniqueName: \"kubernetes.io/projected/4b715e57-8a0e-4615-96e0-6b93e24a6958-kube-api-access-n5t2g\") pod \"nginx-deployment-7fcdb87857-cp47w\" (UID: \"4b715e57-8a0e-4615-96e0-6b93e24a6958\") " pod="default/nginx-deployment-7fcdb87857-cp47w" Jan 23 23:54:05.227007 containerd[1826]: time="2026-01-23T23:54:05.226967891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-cp47w,Uid:4b715e57-8a0e-4615-96e0-6b93e24a6958,Namespace:default,Attempt:0,}" Jan 23 23:54:05.316346 kubelet[2523]: E0123 23:54:05.315868 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:05.323127 containerd[1826]: time="2026-01-23T23:54:05.323072929Z" level=error msg="Failed to destroy network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:54:05.324966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64-shm.mount: Deactivated successfully. Jan 23 23:54:05.326484 containerd[1826]: time="2026-01-23T23:54:05.326440816Z" level=error msg="encountered an error cleaning up failed sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:54:05.326706 containerd[1826]: time="2026-01-23T23:54:05.326676257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-cp47w,Uid:4b715e57-8a0e-4615-96e0-6b93e24a6958,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:54:05.326895 kubelet[2523]: E0123 23:54:05.326864 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:54:05.326944 kubelet[2523]: E0123 23:54:05.326923 2523 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-cp47w" Jan 23 23:54:05.326972 kubelet[2523]: E0123 23:54:05.326945 2523 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-cp47w" Jan 23 23:54:05.327013 kubelet[2523]: E0123 23:54:05.326985 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-cp47w_default(4b715e57-8a0e-4615-96e0-6b93e24a6958)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-cp47w_default(4b715e57-8a0e-4615-96e0-6b93e24a6958)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-cp47w" podUID="4b715e57-8a0e-4615-96e0-6b93e24a6958" Jan 23 23:54:05.415361 kubelet[2523]: I0123 23:54:05.415334 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:05.416329 containerd[1826]: time="2026-01-23T23:54:05.416301802Z" level=info msg="StopPodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\"" Jan 23 23:54:05.416556 containerd[1826]: time="2026-01-23T23:54:05.416523882Z" level=info msg="Ensure that sandbox 96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64 in task-service has been cleanup successfully" Jan 23 23:54:05.447690 containerd[1826]: time="2026-01-23T23:54:05.447627826Z" level=error msg="StopPodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" failed" error="failed to destroy network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:54:05.448036 kubelet[2523]: E0123 23:54:05.447826 2523 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:05.448036 kubelet[2523]: E0123 23:54:05.447887 2523 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64"} Jan 23 23:54:05.448036 kubelet[2523]: E0123 23:54:05.447940 2523 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b715e57-8a0e-4615-96e0-6b93e24a6958\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:54:05.448036 kubelet[2523]: E0123 23:54:05.447964 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b715e57-8a0e-4615-96e0-6b93e24a6958\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-cp47w" podUID="4b715e57-8a0e-4615-96e0-6b93e24a6958" Jan 23 23:54:05.646425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346323086.mount: Deactivated successfully. Jan 23 23:54:05.845970 containerd[1826]: time="2026-01-23T23:54:05.845920407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:05.848350 containerd[1826]: time="2026-01-23T23:54:05.848318732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:54:05.851327 containerd[1826]: time="2026-01-23T23:54:05.851283938Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:05.855098 containerd[1826]: time="2026-01-23T23:54:05.855056466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:05.855963 containerd[1826]: time="2026-01-23T23:54:05.855544627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.448083528s" Jan 23 23:54:05.855963 containerd[1826]: time="2026-01-23T23:54:05.855575227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:54:05.861828 containerd[1826]: time="2026-01-23T23:54:05.861795280Z" level=info msg="CreateContainer within sandbox \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:54:05.895786 containerd[1826]: time="2026-01-23T23:54:05.895747030Z" level=info msg="CreateContainer within sandbox \"b3cd9b0a714b2767085aaae316895f66a3cf6cdaaaf6808d2280d89383dda431\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d64d21106420f87130490afead41fedd8b69a31c04a293861f4d9e4377959d9f\"" Jan 23 23:54:05.896600 containerd[1826]: time="2026-01-23T23:54:05.896227871Z" level=info msg="StartContainer for \"d64d21106420f87130490afead41fedd8b69a31c04a293861f4d9e4377959d9f\"" Jan 23 23:54:05.948380 containerd[1826]: time="2026-01-23T23:54:05.948253898Z" level=info 
msg="StartContainer for \"d64d21106420f87130490afead41fedd8b69a31c04a293861f4d9e4377959d9f\" returns successfully" Jan 23 23:54:06.172168 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:54:06.172331 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 23:54:06.316694 kubelet[2523]: E0123 23:54:06.316663 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:07.317482 kubelet[2523]: E0123 23:54:07.317427 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:07.692626 kernel: bpftool[3344]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:54:07.897130 systemd-networkd[1408]: vxlan.calico: Link UP Jan 23 23:54:07.897139 systemd-networkd[1408]: vxlan.calico: Gained carrier Jan 23 23:54:08.318134 kubelet[2523]: E0123 23:54:08.318094 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:09.013640 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL Jan 23 23:54:09.318942 kubelet[2523]: E0123 23:54:09.318831 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:10.303724 kubelet[2523]: E0123 23:54:10.303683 2523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:10.318949 kubelet[2523]: E0123 23:54:10.318927 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:11.320040 kubelet[2523]: E0123 23:54:11.320002 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:12.320421 kubelet[2523]: E0123 23:54:12.320386 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:13.320804 kubelet[2523]: E0123 23:54:13.320759 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:14.321428 kubelet[2523]: E0123 23:54:14.321393 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:14.594283 kubelet[2523]: I0123 23:54:14.593849 2523 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:54:14.684021 systemd[1]: run-containerd-runc-k8s.io-d64d21106420f87130490afead41fedd8b69a31c04a293861f4d9e4377959d9f-runc.E7Tt9A.mount: Deactivated successfully. 
Jan 23 23:54:14.687825 kubelet[2523]: I0123 23:54:14.685425 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lsl7k" podStartSLOduration=12.061052782 podStartE2EDuration="24.685408942s" podCreationTimestamp="2026-01-23 23:53:50 +0000 UTC" firstStartedPulling="2026-01-23 23:53:53.231795988 +0000 UTC m=+3.582454356" lastFinishedPulling="2026-01-23 23:54:05.856152148 +0000 UTC m=+16.206810516" observedRunningTime="2026-01-23 23:54:06.447449047 +0000 UTC m=+16.798107415" watchObservedRunningTime="2026-01-23 23:54:14.685408942 +0000 UTC m=+25.036067310" Jan 23 23:54:15.321684 kubelet[2523]: E0123 23:54:15.321637 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:15.378044 containerd[1826]: time="2026-01-23T23:54:15.377800039Z" level=info msg="StopPodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\"" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.421 [INFO][3472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.421 [INFO][3472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" iface="eth0" netns="/var/run/netns/cni-8a4040ae-de57-c26e-6657-071a48a2cfb9" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.422 [INFO][3472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" iface="eth0" netns="/var/run/netns/cni-8a4040ae-de57-c26e-6657-071a48a2cfb9" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.422 [INFO][3472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" iface="eth0" netns="/var/run/netns/cni-8a4040ae-de57-c26e-6657-071a48a2cfb9" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.422 [INFO][3472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.422 [INFO][3472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.439 [INFO][3479] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.439 [INFO][3479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.439 [INFO][3479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.449 [WARNING][3479] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.449 [INFO][3479] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.451 [INFO][3479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:15.455947 containerd[1826]: 2026-01-23 23:54:15.453 [INFO][3472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:15.456842 containerd[1826]: time="2026-01-23T23:54:15.456053034Z" level=info msg="TearDown network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" successfully" Jan 23 23:54:15.456842 containerd[1826]: time="2026-01-23T23:54:15.456081274Z" level=info msg="StopPodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" returns successfully" Jan 23 23:54:15.459752 containerd[1826]: time="2026-01-23T23:54:15.457756718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fc6m5,Uid:6cc706f1-f210-4e7b-b9e2-07fb02a22dce,Namespace:calico-system,Attempt:1,}" Jan 23 23:54:15.459256 systemd[1]: run-netns-cni\x2d8a4040ae\x2dde57\x2dc26e\x2d6657\x2d071a48a2cfb9.mount: Deactivated successfully. Jan 23 23:54:15.584747 systemd-networkd[1408]: califfd373037f4: Link UP Jan 23 23:54:15.584915 systemd-networkd[1408]: califfd373037f4: Gained carrier Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.517 [INFO][3485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.35-k8s-csi--node--driver--fc6m5-eth0 csi-node-driver- calico-system 6cc706f1-f210-4e7b-b9e2-07fb02a22dce 1445 0 2026-01-23 23:53:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.20.35 csi-node-driver-fc6m5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califfd373037f4 [] [] }} ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.517 [INFO][3485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.539 [INFO][3497] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" HandleID="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" 
Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.539 [INFO][3497] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" HandleID="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.20.35", "pod":"csi-node-driver-fc6m5", "timestamp":"2026-01-23 23:54:15.539225159 +0000 UTC"}, Hostname:"10.200.20.35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.539 [INFO][3497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.539 [INFO][3497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.539 [INFO][3497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.35' Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.550 [INFO][3497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.554 [INFO][3497] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.558 [INFO][3497] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.560 [INFO][3497] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.563 [INFO][3497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.563 [INFO][3497] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.564 [INFO][3497] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69 Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.572 [INFO][3497] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.577 [INFO][3497] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.90.1/26] block=192.168.90.0/26 handle="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.577 [INFO][3497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.1/26] handle="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" host="10.200.20.35" Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.577 [INFO][3497] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:15.599100 containerd[1826]: 2026-01-23 23:54:15.577 [INFO][3497] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.90.1/26] IPv6=[] ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" HandleID="k8s-pod-network.c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.599902 containerd[1826]: 2026-01-23 23:54:15.579 [INFO][3485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-csi--node--driver--fc6m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6cc706f1-f210-4e7b-b9e2-07fb02a22dce", ResourceVersion:"1445", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"", Pod:"csi-node-driver-fc6m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfd373037f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:15.599902 containerd[1826]: 2026-01-23 23:54:15.580 [INFO][3485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.1/32] ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.599902 containerd[1826]: 2026-01-23 23:54:15.580 [INFO][3485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfd373037f4 ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.599902 containerd[1826]: 2026-01-23 23:54:15.583 [INFO][3485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.599902 containerd[1826]: 2026-01-23 23:54:15.585 [INFO][3485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" 
Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-csi--node--driver--fc6m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6cc706f1-f210-4e7b-b9e2-07fb02a22dce", ResourceVersion:"1445", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69", Pod:"csi-node-driver-fc6m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfd373037f4", MAC:"22:a3:5d:a4:ed:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:15.599902 containerd[1826]: 2026-01-23 23:54:15.597 [INFO][3485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69" Namespace="calico-system" Pod="csi-node-driver-fc6m5" WorkloadEndpoint="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:15.622262 containerd[1826]: time="2026-01-23T23:54:15.622143244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:15.622262 containerd[1826]: time="2026-01-23T23:54:15.622207724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:15.622798 containerd[1826]: time="2026-01-23T23:54:15.622227484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:15.622955 containerd[1826]: time="2026-01-23T23:54:15.622924006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:15.664178 containerd[1826]: time="2026-01-23T23:54:15.664133247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fc6m5,Uid:6cc706f1-f210-4e7b-b9e2-07fb02a22dce,Namespace:calico-system,Attempt:1,} returns sandbox id \"c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69\"" Jan 23 23:54:15.665799 containerd[1826]: time="2026-01-23T23:54:15.665776731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:54:15.987541 containerd[1826]: time="2026-01-23T23:54:15.987411609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:15.990829 containerd[1826]: time="2026-01-23T23:54:15.990735895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:54:15.990829 containerd[1826]: time="2026-01-23T23:54:15.990803615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:54:15.991002 kubelet[2523]: E0123 23:54:15.990965 2523 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:15.991045 kubelet[2523]: E0123 23:54:15.991016 2523 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:15.991200 kubelet[2523]: E0123 23:54:15.991151 2523 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcdqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:15.993206 containerd[1826]: time="2026-01-23T23:54:15.993151900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:54:16.259363 containerd[1826]: time="2026-01-23T23:54:16.259255348Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:16.262751 containerd[1826]: time="2026-01-23T23:54:16.262653195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:54:16.262751 containerd[1826]: time="2026-01-23T23:54:16.262713635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:54:16.262882 kubelet[2523]: E0123 23:54:16.262839 2523 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:16.262919 kubelet[2523]: E0123 23:54:16.262882 2523 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:16.263031 kubelet[2523]: E0123 23:54:16.262991 2523 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcdqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:16.264358 kubelet[2523]: E0123 23:54:16.264302 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:16.322591 kubelet[2523]: E0123 23:54:16.322558 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:16.438506 kubelet[2523]: E0123 23:54:16.438462 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:17.013786 systemd-networkd[1408]: califfd373037f4: Gained IPv6LL Jan 23 23:54:17.323267 kubelet[2523]: E0123 23:54:17.323227 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:17.440390 kubelet[2523]: E0123 23:54:17.440351 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:18.323954 kubelet[2523]: E0123 23:54:18.323907 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:19.324910 kubelet[2523]: E0123 23:54:19.324875 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:20.325870 kubelet[2523]: E0123 23:54:20.325827 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:20.378845 containerd[1826]: time="2026-01-23T23:54:20.378387962Z" level=info msg="StopPodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\"" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.419 [INFO][3574] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.419 [INFO][3574] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" iface="eth0" netns="/var/run/netns/cni-c0d76c1a-dd62-de82-d75d-379f85cdf358" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.420 [INFO][3574] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" iface="eth0" netns="/var/run/netns/cni-c0d76c1a-dd62-de82-d75d-379f85cdf358" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.420 [INFO][3574] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" iface="eth0" netns="/var/run/netns/cni-c0d76c1a-dd62-de82-d75d-379f85cdf358" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.420 [INFO][3574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.420 [INFO][3574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.439 [INFO][3581] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.439 [INFO][3581] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.439 [INFO][3581] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.448 [WARNING][3581] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.448 [INFO][3581] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.450 [INFO][3581] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:20.453316 containerd[1826]: 2026-01-23 23:54:20.451 [INFO][3574] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:20.455709 containerd[1826]: time="2026-01-23T23:54:20.455678235Z" level=info msg="TearDown network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" successfully" Jan 23 23:54:20.455877 containerd[1826]: time="2026-01-23T23:54:20.455791395Z" level=info msg="StopPodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" returns successfully" Jan 23 23:54:20.456200 systemd[1]: run-netns-cni\x2dc0d76c1a\x2ddd62\x2dde82\x2dd75d\x2d379f85cdf358.mount: Deactivated successfully. Jan 23 23:54:20.456725 containerd[1826]: time="2026-01-23T23:54:20.456456917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-cp47w,Uid:4b715e57-8a0e-4615-96e0-6b93e24a6958,Namespace:default,Attempt:1,}" Jan 23 23:54:20.599481 systemd-networkd[1408]: cali752f0773864: Link UP Jan 23 23:54:20.600564 systemd-networkd[1408]: cali752f0773864: Gained carrier Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.531 [INFO][3588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0 nginx-deployment-7fcdb87857- default 4b715e57-8a0e-4615-96e0-6b93e24a6958 1487 0 2026-01-23 23:54:04 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.20.35 nginx-deployment-7fcdb87857-cp47w eth0 default [] [] [kns.default ksa.default.default] cali752f0773864 [] [] }} ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.531 [INFO][3588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.552 [INFO][3599] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" HandleID="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.552 [INFO][3599] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" HandleID="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"default", "node":"10.200.20.35", "pod":"nginx-deployment-7fcdb87857-cp47w", "timestamp":"2026-01-23 23:54:20.552386147 +0000 UTC"}, Hostname:"10.200.20.35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.552 [INFO][3599] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.552 [INFO][3599] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.552 [INFO][3599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.35' Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.566 [INFO][3599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.571 [INFO][3599] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.575 [INFO][3599] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.577 [INFO][3599] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.579 [INFO][3599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.579 [INFO][3599] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.581 [INFO][3599] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.586 [INFO][3599] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.592 [INFO][3599] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.90.2/26] block=192.168.90.0/26 handle="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.592 [INFO][3599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.2/26] handle="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" host="10.200.20.35" Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.592 [INFO][3599] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:54:20.613707 containerd[1826]: 2026-01-23 23:54:20.592 [INFO][3599] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.90.2/26] IPv6=[] ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" HandleID="k8s-pod-network.e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.614741 containerd[1826]: 2026-01-23 23:54:20.594 [INFO][3588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"4b715e57-8a0e-4615-96e0-6b93e24a6958", ResourceVersion:"1487", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-cp47w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali752f0773864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:20.614741 containerd[1826]: 2026-01-23 23:54:20.595 [INFO][3588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.2/32] ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.614741 containerd[1826]: 2026-01-23 23:54:20.595 [INFO][3588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali752f0773864 ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.614741 containerd[1826]: 2026-01-23 23:54:20.600 [INFO][3588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.614741 containerd[1826]: 2026-01-23 23:54:20.601 [INFO][3588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" 
WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"4b715e57-8a0e-4615-96e0-6b93e24a6958", ResourceVersion:"1487", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff", Pod:"nginx-deployment-7fcdb87857-cp47w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali752f0773864", MAC:"96:a1:ec:a1:e8:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:20.614741 containerd[1826]: 2026-01-23 23:54:20.611 [INFO][3588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff" Namespace="default" Pod="nginx-deployment-7fcdb87857-cp47w" WorkloadEndpoint="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:20.638916 containerd[1826]: time="2026-01-23T23:54:20.638772478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:20.638916 containerd[1826]: time="2026-01-23T23:54:20.638827959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:20.638916 containerd[1826]: time="2026-01-23T23:54:20.638845959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:20.639201 containerd[1826]: time="2026-01-23T23:54:20.639150679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:20.681966 containerd[1826]: time="2026-01-23T23:54:20.681899684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-cp47w,Uid:4b715e57-8a0e-4615-96e0-6b93e24a6958,Namespace:default,Attempt:1,} returns sandbox id \"e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff\"" Jan 23 23:54:20.683576 containerd[1826]: time="2026-01-23T23:54:20.683367287Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 23:54:21.326952 kubelet[2523]: E0123 23:54:21.326915 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:22.327651 kubelet[2523]: E0123 23:54:22.327566 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:22.518192 systemd-networkd[1408]: cali752f0773864: Gained IPv6LL Jan 23 23:54:22.843356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985196758.mount: Deactivated successfully. Jan 23 23:54:23.327862 kubelet[2523]: E0123 23:54:23.327818 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:23.587005 containerd[1826]: time="2026-01-23T23:54:23.586883822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:23.589672 containerd[1826]: time="2026-01-23T23:54:23.589506466Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=62404643" Jan 23 23:54:23.592470 containerd[1826]: time="2026-01-23T23:54:23.592445430Z" level=info msg="ImageCreate event name:\"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:23.597467 containerd[1826]: time="2026-01-23T23:54:23.597437877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:23.598398 containerd[1826]: time="2026-01-23T23:54:23.598370678Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"62404521\" in 2.914973751s" Jan 23 23:54:23.598597 containerd[1826]: time="2026-01-23T23:54:23.598490198Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\"" Jan 23 23:54:23.600463 containerd[1826]: time="2026-01-23T23:54:23.600438001Z" level=info msg="CreateContainer within sandbox \"e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 23:54:23.631163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076472193.mount: Deactivated successfully. 
Jan 23 23:54:23.638114 containerd[1826]: time="2026-01-23T23:54:23.638074732Z" level=info msg="CreateContainer within sandbox \"e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ff77c3ce5513dce9d18a14516f5bbaa7c34eba576d9c9ac1e3f1158191939179\"" Jan 23 23:54:23.638672 containerd[1826]: time="2026-01-23T23:54:23.638603573Z" level=info msg="StartContainer for \"ff77c3ce5513dce9d18a14516f5bbaa7c34eba576d9c9ac1e3f1158191939179\"" Jan 23 23:54:23.682448 containerd[1826]: time="2026-01-23T23:54:23.682391353Z" level=info msg="StartContainer for \"ff77c3ce5513dce9d18a14516f5bbaa7c34eba576d9c9ac1e3f1158191939179\" returns successfully" Jan 23 23:54:24.328255 kubelet[2523]: E0123 23:54:24.328211 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:25.329368 kubelet[2523]: E0123 23:54:25.329310 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:26.329867 kubelet[2523]: E0123 23:54:26.329822 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:27.330335 kubelet[2523]: E0123 23:54:27.330278 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:28.331417 kubelet[2523]: E0123 23:54:28.331377 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:29.332209 kubelet[2523]: E0123 23:54:29.332164 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:30.303460 kubelet[2523]: E0123 23:54:30.303425 2523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:30.332962 kubelet[2523]: E0123 23:54:30.332929 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:30.379773 containerd[1826]: time="2026-01-23T23:54:30.379730038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:54:30.656900 containerd[1826]: time="2026-01-23T23:54:30.656839740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:30.659958 containerd[1826]: time="2026-01-23T23:54:30.659927066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:54:30.660031 containerd[1826]: time="2026-01-23T23:54:30.660015786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:54:30.660205 kubelet[2523]: E0123 23:54:30.660157 2523 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:30.660257 kubelet[2523]: E0123 23:54:30.660216 2523 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:30.660689 kubelet[2523]: E0123 23:54:30.660339 2523 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcdqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:30.662269 containerd[1826]: time="2026-01-23T23:54:30.662240550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:54:30.931915 containerd[1826]: time="2026-01-23T23:54:30.931694676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:30.936303 containerd[1826]: time="2026-01-23T23:54:30.936212325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:54:30.936303 containerd[1826]: time="2026-01-23T23:54:30.936278685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:54:30.936427 kubelet[2523]: 
E0123 23:54:30.936383 2523 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:30.936484 kubelet[2523]: E0123 23:54:30.936443 2523 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:30.936824 kubelet[2523]: E0123 23:54:30.936568 2523 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcdqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:30.937743 kubelet[2523]: E0123 23:54:30.937714 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:31.333040 kubelet[2523]: E0123 23:54:31.332993 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:32.333454 kubelet[2523]: E0123 23:54:32.333417 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:32.916547 kubelet[2523]: I0123 23:54:32.914814 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-cp47w" podStartSLOduration=25.99829507 podStartE2EDuration="28.914796103s" podCreationTimestamp="2026-01-23 23:54:04 +0000 UTC" firstStartedPulling="2026-01-23 23:54:20.682901806 +0000 UTC m=+31.033560174" lastFinishedPulling="2026-01-23 23:54:23.599402839 +0000 UTC m=+33.950061207" observedRunningTime="2026-01-23 23:54:24.472417799 +0000 UTC m=+34.823083207" watchObservedRunningTime="2026-01-23 23:54:32.914796103 +0000 UTC m=+43.265454431" Jan 23 23:54:32.951171 kubelet[2523]: I0123 23:54:32.951138 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dxt\" (UniqueName: \"kubernetes.io/projected/55b38c00-58cb-4fd3-8429-a629ae45f11c-kube-api-access-g9dxt\") pod \"nfs-server-provisioner-0\" (UID: \"55b38c00-58cb-4fd3-8429-a629ae45f11c\") " pod="default/nfs-server-provisioner-0" Jan 23 23:54:32.951387 kubelet[2523]: I0123 23:54:32.951365 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/55b38c00-58cb-4fd3-8429-a629ae45f11c-data\") pod \"nfs-server-provisioner-0\" (UID: \"55b38c00-58cb-4fd3-8429-a629ae45f11c\") " pod="default/nfs-server-provisioner-0" Jan 23 23:54:33.218231 containerd[1826]: time="2026-01-23T23:54:33.218123894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:55b38c00-58cb-4fd3-8429-a629ae45f11c,Namespace:default,Attempt:0,}" Jan 23 23:54:33.333681 kubelet[2523]: E0123 23:54:33.333639 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:33.354491 systemd-networkd[1408]: cali60e51b789ff: Link UP Jan 23 23:54:33.355978 systemd-networkd[1408]: cali60e51b789ff: Gained carrier Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.282 [INFO][3756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.35-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 55b38c00-58cb-4fd3-8429-a629ae45f11c 1563 0 2026-01-23 23:54:32 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.20.35 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.282 [INFO][3756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.303 [INFO][3768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" HandleID="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Workload="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.303 [INFO][3768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" HandleID="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Workload="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"default", "node":"10.200.20.35", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-23 23:54:33.303461941 +0000 UTC"}, Hostname:"10.200.20.35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.303 [INFO][3768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.303 [INFO][3768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.303 [INFO][3768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.35' Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.313 [INFO][3768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.317 [INFO][3768] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.321 [INFO][3768] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.323 [INFO][3768] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.326 [INFO][3768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.326 [INFO][3768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.327 [INFO][3768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4 Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.332 [INFO][3768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.348 [INFO][3768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.90.3/26] block=192.168.90.0/26 handle="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.348 [INFO][3768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.3/26] handle="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" host="10.200.20.35" Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.348 [INFO][3768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:54:33.370431 containerd[1826]: 2026-01-23 23:54:33.348 [INFO][3768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.90.3/26] IPv6=[] ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" HandleID="k8s-pod-network.ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Workload="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.370989 containerd[1826]: 2026-01-23 23:54:33.350 [INFO][3756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"55b38c00-58cb-4fd3-8429-a629ae45f11c", ResourceVersion:"1563", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:33.370989 containerd[1826]: 2026-01-23 23:54:33.350 [INFO][3756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.3/32] ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.370989 containerd[1826]: 2026-01-23 23:54:33.351 [INFO][3756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.370989 containerd[1826]: 2026-01-23 23:54:33.356 [INFO][3756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.371114 containerd[1826]: 2026-01-23 23:54:33.356 [INFO][3756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"55b38c00-58cb-4fd3-8429-a629ae45f11c", ResourceVersion:"1563", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"9a:2d:02:01:d3:28", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:33.371114 containerd[1826]: 2026-01-23 23:54:33.367 [INFO][3756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.20.35-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:54:33.391303 containerd[1826]: time="2026-01-23T23:54:33.391204592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:33.391303 containerd[1826]: time="2026-01-23T23:54:33.391259072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:33.391303 containerd[1826]: time="2026-01-23T23:54:33.391270752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:33.392494 containerd[1826]: time="2026-01-23T23:54:33.391353512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:33.431074 containerd[1826]: time="2026-01-23T23:54:33.431038989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:55b38c00-58cb-4fd3-8429-a629ae45f11c,Namespace:default,Attempt:0,} returns sandbox id \"ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4\"" Jan 23 23:54:33.433221 containerd[1826]: time="2026-01-23T23:54:33.432774313Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 23:54:34.334066 kubelet[2523]: E0123 23:54:34.334014 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:35.318029 systemd-networkd[1408]: cali60e51b789ff: Gained IPv6LL Jan 23 23:54:35.335555 kubelet[2523]: E0123 23:54:35.335053 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:35.642506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222654765.mount: Deactivated successfully. Jan 23 23:54:36.336207 kubelet[2523]: E0123 23:54:36.336152 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:37.304838 containerd[1826]: time="2026-01-23T23:54:37.304787862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.307803 containerd[1826]: time="2026-01-23T23:54:37.307406067Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 23 23:54:37.312789 containerd[1826]: time="2026-01-23T23:54:37.312686638Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.317080 containerd[1826]: time="2026-01-23T23:54:37.317030886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.317991 containerd[1826]: time="2026-01-23T23:54:37.317954128Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.885146895s" Jan 23 23:54:37.318049 containerd[1826]: time="2026-01-23T23:54:37.317991728Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 23 23:54:37.320829 containerd[1826]: time="2026-01-23T23:54:37.320795933Z" level=info msg="CreateContainer within sandbox \"ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 23:54:37.337085 kubelet[2523]: E0123 23:54:37.337039 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:37.346366 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2066610147.mount: Deactivated successfully. Jan 23 23:54:37.356329 containerd[1826]: time="2026-01-23T23:54:37.356274843Z" level=info msg="CreateContainer within sandbox \"ce9b75e921a8c80ef2d5d95c45ed8eb633b926b5c845f62fd0410b2c88927ee4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"35b6a7d3f3d0dbf4e7855bcd711cafb931525c4ac4f36d8a80a2e291efb4c89c\"" Jan 23 23:54:37.357014 containerd[1826]: time="2026-01-23T23:54:37.356983724Z" level=info msg="StartContainer for \"35b6a7d3f3d0dbf4e7855bcd711cafb931525c4ac4f36d8a80a2e291efb4c89c\"" Jan 23 23:54:37.408752 containerd[1826]: time="2026-01-23T23:54:37.408697905Z" level=info msg="StartContainer for \"35b6a7d3f3d0dbf4e7855bcd711cafb931525c4ac4f36d8a80a2e291efb4c89c\" returns successfully" Jan 23 23:54:37.492995 kubelet[2523]: I0123 23:54:37.492944 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.606254371 podStartE2EDuration="5.492926229s" podCreationTimestamp="2026-01-23 23:54:32 +0000 UTC" firstStartedPulling="2026-01-23 23:54:33.432456112 +0000 UTC m=+43.783114480" lastFinishedPulling="2026-01-23 23:54:37.31912797 +0000 UTC m=+47.669786338" observedRunningTime="2026-01-23 23:54:37.492790429 +0000 UTC m=+47.843448797" watchObservedRunningTime="2026-01-23 23:54:37.492926229 +0000 UTC m=+47.843584597" Jan 23 23:54:38.337607 kubelet[2523]: E0123 23:54:38.337556 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:39.338167 kubelet[2523]: E0123 23:54:39.338130 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:40.339010 kubelet[2523]: E0123 23:54:40.338962 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:41.339654 kubelet[2523]: E0123 23:54:41.339620 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:42.340420 kubelet[2523]: E0123 23:54:42.340384 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:43.340622 kubelet[2523]: E0123 23:54:43.340585 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:44.340781 kubelet[2523]: E0123 23:54:44.340725 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:44.381075 kubelet[2523]: E0123 23:54:44.380940 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:45.341583 kubelet[2523]: E0123 23:54:45.341525 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:46.342609 kubelet[2523]: E0123 23:54:46.342574 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:47.343143 kubelet[2523]: E0123 23:54:47.343105 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:48.343964 kubelet[2523]: E0123 23:54:48.343918 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:49.344767 kubelet[2523]: E0123 23:54:49.344730 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:50.303626 kubelet[2523]: E0123 23:54:50.303583 2523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:50.331188 containerd[1826]: time="2026-01-23T23:54:50.331154753Z" level=info msg="StopPodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\"" Jan 23 23:54:50.345717 kubelet[2523]: E0123 23:54:50.345671 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.365 [WARNING][3964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"4b715e57-8a0e-4615-96e0-6b93e24a6958", ResourceVersion:"1503", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff", Pod:"nginx-deployment-7fcdb87857-cp47w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali752f0773864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.365 [INFO][3964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.365 [INFO][3964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" iface="eth0" netns="" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.365 [INFO][3964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.365 [INFO][3964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.387 [INFO][3971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.387 [INFO][3971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.387 [INFO][3971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.397 [WARNING][3971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.397 [INFO][3971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.399 [INFO][3971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:50.401919 containerd[1826]: 2026-01-23 23:54:50.400 [INFO][3964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.402563 containerd[1826]: time="2026-01-23T23:54:50.401955003Z" level=info msg="TearDown network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" successfully" Jan 23 23:54:50.402563 containerd[1826]: time="2026-01-23T23:54:50.401979243Z" level=info msg="StopPodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" returns successfully" Jan 23 23:54:50.402563 containerd[1826]: time="2026-01-23T23:54:50.402461004Z" level=info msg="RemovePodSandbox for \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\"" Jan 23 23:54:50.402563 containerd[1826]: time="2026-01-23T23:54:50.402488564Z" level=info msg="Forcibly stopping sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\"" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.437 [WARNING][3987] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"4b715e57-8a0e-4615-96e0-6b93e24a6958", ResourceVersion:"1503", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"e6fb77ec8eceef37fd6d63521abf7250b536a47b317bc9f2447360863b8450ff", Pod:"nginx-deployment-7fcdb87857-cp47w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali752f0773864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.437 [INFO][3987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.437 [INFO][3987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" iface="eth0" netns="" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.437 [INFO][3987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.437 [INFO][3987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.453 [INFO][3995] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.454 [INFO][3995] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.454 [INFO][3995] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.462 [WARNING][3995] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.462 [INFO][3995] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" HandleID="k8s-pod-network.96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Workload="10.200.20.35-k8s-nginx--deployment--7fcdb87857--cp47w-eth0" Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.464 [INFO][3995] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:50.467120 containerd[1826]: 2026-01-23 23:54:50.465 [INFO][3987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64" Jan 23 23:54:50.468360 containerd[1826]: time="2026-01-23T23:54:50.467608643Z" level=info msg="TearDown network for sandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" successfully" Jan 23 23:54:50.474836 containerd[1826]: time="2026-01-23T23:54:50.474808897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:54:50.474949 containerd[1826]: time="2026-01-23T23:54:50.474934897Z" level=info msg="RemovePodSandbox \"96f13471eac50233c17d70e293dcf3ace07dd3198df750962d1bdff76086ec64\" returns successfully" Jan 23 23:54:50.475481 containerd[1826]: time="2026-01-23T23:54:50.475452618Z" level=info msg="StopPodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\"" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.507 [WARNING][4009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-csi--node--driver--fc6m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6cc706f1-f210-4e7b-b9e2-07fb02a22dce", ResourceVersion:"1631", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69", Pod:"csi-node-driver-fc6m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfd373037f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.507 [INFO][4009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.507 [INFO][4009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" iface="eth0" netns="" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.507 [INFO][4009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.507 [INFO][4009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.524 [INFO][4016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.524 [INFO][4016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.525 [INFO][4016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.535 [WARNING][4016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.535 [INFO][4016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.539 [INFO][4016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:50.542367 containerd[1826]: 2026-01-23 23:54:50.540 [INFO][4009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.543268 containerd[1826]: time="2026-01-23T23:54:50.542412541Z" level=info msg="TearDown network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" successfully" Jan 23 23:54:50.543268 containerd[1826]: time="2026-01-23T23:54:50.542470741Z" level=info msg="StopPodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" returns successfully" Jan 23 23:54:50.543268 containerd[1826]: time="2026-01-23T23:54:50.542894822Z" level=info msg="RemovePodSandbox for \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\"" Jan 23 23:54:50.543268 containerd[1826]: time="2026-01-23T23:54:50.542929262Z" level=info msg="Forcibly stopping sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\"" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.585 [WARNING][4031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-csi--node--driver--fc6m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6cc706f1-f210-4e7b-b9e2-07fb02a22dce", ResourceVersion:"1631", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"c887975928b5dd23a6e6cbba751a06c61c4a18b19caee788f9db74228b52ed69", Pod:"csi-node-driver-fc6m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califfd373037f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.585 [INFO][4031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.585 [INFO][4031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" iface="eth0" netns="" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.585 [INFO][4031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.585 [INFO][4031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.602 [INFO][4038] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.602 [INFO][4038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.602 [INFO][4038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.614 [WARNING][4038] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.614 [INFO][4038] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" HandleID="k8s-pod-network.6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Workload="10.200.20.35-k8s-csi--node--driver--fc6m5-eth0" Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.618 [INFO][4038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:54:50.621437 containerd[1826]: 2026-01-23 23:54:50.619 [INFO][4031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca" Jan 23 23:54:50.622076 containerd[1826]: time="2026-01-23T23:54:50.621485206Z" level=info msg="TearDown network for sandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" successfully" Jan 23 23:54:50.627854 containerd[1826]: time="2026-01-23T23:54:50.627819778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:54:50.627911 containerd[1826]: time="2026-01-23T23:54:50.627868578Z" level=info msg="RemovePodSandbox \"6dc5f7c573262c508e2029381439f3ddd2c166d3bea7bd9e38684317468c18ca\" returns successfully" Jan 23 23:54:51.346087 kubelet[2523]: E0123 23:54:51.346053 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:52.346361 kubelet[2523]: E0123 23:54:52.346330 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:53.347028 kubelet[2523]: E0123 23:54:53.346987 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:54.347900 kubelet[2523]: E0123 23:54:54.347869 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:55.348988 kubelet[2523]: E0123 23:54:55.348946 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:55.378465 containerd[1826]: time="2026-01-23T23:54:55.378383815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:54:55.665004 containerd[1826]: time="2026-01-23T23:54:55.664815878Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:55.668400 containerd[1826]: time="2026-01-23T23:54:55.668294644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:54:55.668400 containerd[1826]: time="2026-01-23T23:54:55.668360764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:54:55.668548 kubelet[2523]: E0123 23:54:55.668488 2523 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:55.668548 kubelet[2523]: E0123 23:54:55.668538 2523 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:55.668692 kubelet[2523]: E0123 23:54:55.668647 2523 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcdqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:55.670700 containerd[1826]: time="2026-01-23T23:54:55.670571128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:54:55.932398 containerd[1826]: time="2026-01-23T23:54:55.932136344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:55.935298 containerd[1826]: time="2026-01-23T23:54:55.935208230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:54:55.935298 containerd[1826]: time="2026-01-23T23:54:55.935274070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:54:55.935455 kubelet[2523]: E0123 23:54:55.935392 2523 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:55.935455 kubelet[2523]: E0123 23:54:55.935434 2523 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:55.935603 kubelet[2523]: E0123 23:54:55.935548 2523 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcdqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fc6m5_calico-system(6cc706f1-f210-4e7b-b9e2-07fb02a22dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:55.936859 kubelet[2523]: E0123 23:54:55.936819 2523 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fc6m5" podUID="6cc706f1-f210-4e7b-b9e2-07fb02a22dce" Jan 23 23:54:56.349902 kubelet[2523]: E0123 23:54:56.349856 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:57.350930 kubelet[2523]: E0123 23:54:57.350887 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:58.351290 kubelet[2523]: E0123 23:54:58.351258 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:54:59.352026 kubelet[2523]: E0123 23:54:59.351982 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:00.352560 kubelet[2523]: E0123 23:55:00.352511 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:01.352935 kubelet[2523]: E0123 23:55:01.352895 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:02.353749 kubelet[2523]: E0123 23:55:02.353704 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:02.788899 kubelet[2523]: I0123 23:55:02.788797 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d9553b4-8aff-4cbc-9a01-0efe58431b8d\" (UniqueName: \"kubernetes.io/nfs/bf9f80b0-a021-4029-bb55-7e9d2d2459af-pvc-0d9553b4-8aff-4cbc-9a01-0efe58431b8d\") pod \"test-pod-1\" (UID: \"bf9f80b0-a021-4029-bb55-7e9d2d2459af\") " pod="default/test-pod-1" Jan 23 23:55:02.788899 kubelet[2523]: I0123 23:55:02.788837 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjfl\" (UniqueName: \"kubernetes.io/projected/bf9f80b0-a021-4029-bb55-7e9d2d2459af-kube-api-access-dmjfl\") pod \"test-pod-1\" (UID: \"bf9f80b0-a021-4029-bb55-7e9d2d2459af\") " pod="default/test-pod-1" Jan 23 23:55:02.992630 kernel: FS-Cache: Loaded Jan 23 23:55:03.051351 kernel: RPC: Registered named UNIX socket transport module. Jan 23 23:55:03.051469 kernel: RPC: Registered udp transport module. Jan 23 23:55:03.051492 kernel: RPC: Registered tcp transport module. Jan 23 23:55:03.056780 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 23:55:03.056860 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 23 23:55:03.344200 kernel: NFS: Registering the id_resolver key type Jan 23 23:55:03.344302 kernel: Key type id_resolver registered Jan 23 23:55:03.344322 kernel: Key type id_legacy registered Jan 23 23:55:03.354578 kubelet[2523]: E0123 23:55:03.354549 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:03.482345 nfsidmap[4064]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-n-9dffd30f3c' Jan 23 23:55:03.553948 nfsidmap[4065]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-n-9dffd30f3c' Jan 23 23:55:03.609186 containerd[1826]: time="2026-01-23T23:55:03.608880801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bf9f80b0-a021-4029-bb55-7e9d2d2459af,Namespace:default,Attempt:0,}" Jan 23 23:55:03.735072 systemd-networkd[1408]: cali5ec59c6bf6e: Link UP Jan 23 23:55:03.735743 systemd-networkd[1408]: cali5ec59c6bf6e: Gained carrier Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.664 [INFO][4067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.20.35-k8s-test--pod--1-eth0 default bf9f80b0-a021-4029-bb55-7e9d2d2459af 1710 0 2026-01-23 23:54:34 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.20.35 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.664 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.684 [INFO][4078] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" HandleID="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Workload="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.684 [INFO][4078] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" HandleID="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Workload="10.200.20.35-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab180), Attrs:map[string]string{"namespace":"default", "node":"10.200.20.35", "pod":"test-pod-1", "timestamp":"2026-01-23 23:55:03.68429898 +0000 UTC"}, Hostname:"10.200.20.35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.684 [INFO][4078] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.684 [INFO][4078] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.684 [INFO][4078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.20.35' Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.697 [INFO][4078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.702 [INFO][4078] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.707 [INFO][4078] ipam/ipam.go 511: Trying affinity for 192.168.90.0/26 host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.708 [INFO][4078] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.710 [INFO][4078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.710 [INFO][4078] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.712 [INFO][4078] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666 Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.716 [INFO][4078] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.728 [INFO][4078] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.90.4/26] block=192.168.90.0/26 handle="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.728 [INFO][4078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.4/26] handle="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" host="10.200.20.35" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.728 [INFO][4078] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.728 [INFO][4078] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.90.4/26] IPv6=[] ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" HandleID="k8s-pod-network.9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Workload="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.746737 containerd[1826]: 2026-01-23 23:55:03.730 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"bf9f80b0-a021-4029-bb55-7e9d2d2459af", ResourceVersion:"1710", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:55:03.747275 containerd[1826]: 2026-01-23 23:55:03.730 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.4/32] ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.747275 containerd[1826]: 2026-01-23 23:55:03.730 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.747275 containerd[1826]: 2026-01-23 23:55:03.734 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.747275 containerd[1826]: 2026-01-23 23:55:03.734 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.35-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"bf9f80b0-a021-4029-bb55-7e9d2d2459af", ResourceVersion:"1710", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 23, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.20.35", ContainerID:"9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:35:2a:45:4e:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:55:03.747275 containerd[1826]: 2026-01-23 23:55:03.744 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.20.35-k8s-test--pod--1-eth0" Jan 23 23:55:03.767156 containerd[1826]: time="2026-01-23T23:55:03.766484051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:03.767156 containerd[1826]: time="2026-01-23T23:55:03.766651131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:03.767156 containerd[1826]: time="2026-01-23T23:55:03.766680091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.767156 containerd[1826]: time="2026-01-23T23:55:03.766771411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:03.809618 containerd[1826]: time="2026-01-23T23:55:03.809510930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bf9f80b0-a021-4029-bb55-7e9d2d2459af,Namespace:default,Attempt:0,} returns sandbox id \"9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666\"" Jan 23 23:55:03.811199 containerd[1826]: time="2026-01-23T23:55:03.811087773Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 23:55:04.126568 containerd[1826]: time="2026-01-23T23:55:04.125691031Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:04.130541 containerd[1826]: time="2026-01-23T23:55:04.128549516Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 23:55:04.131381 containerd[1826]: time="2026-01-23T23:55:04.131351121Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"62404521\" in 320.235668ms" Jan 23 23:55:04.131460 containerd[1826]: time="2026-01-23T23:55:04.131447162Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\"" Jan 23 23:55:04.133456 containerd[1826]: time="2026-01-23T23:55:04.133425885Z" level=info msg="CreateContainer within sandbox \"9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 23:55:04.166492 containerd[1826]: time="2026-01-23T23:55:04.166448506Z" level=info msg="CreateContainer within sandbox \"9175e066c6270650cc45de01cde264afeedfd5c24bea2e2c8d1af3f3b6ed5666\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"36d8459df2b631788969d8cc8bd9a4a882e932ebd063126bda34f517c61f5ec6\"" Jan 23 23:55:04.167554 containerd[1826]: time="2026-01-23T23:55:04.167109947Z" level=info msg="StartContainer for \"36d8459df2b631788969d8cc8bd9a4a882e932ebd063126bda34f517c61f5ec6\"" Jan 23 23:55:04.216333 containerd[1826]: time="2026-01-23T23:55:04.216294477Z" level=info msg="StartContainer for \"36d8459df2b631788969d8cc8bd9a4a882e932ebd063126bda34f517c61f5ec6\" returns successfully" Jan 23 23:55:04.355523 kubelet[2523]: E0123 23:55:04.355486 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:05.356075 kubelet[2523]: E0123 23:55:05.356036 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:55:05.397664 systemd-networkd[1408]: cali5ec59c6bf6e: Gained IPv6LL Jan 23 23:55:06.357195 kubelet[2523]: E0123 23:55:06.357165 2523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"