Mar 17 17:49:23.904421 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 17:49:23.904444 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025 Mar 17 17:49:23.904454 kernel: KASLR enabled Mar 17 17:49:23.904460 kernel: efi: EFI v2.7 by EDK II Mar 17 17:49:23.904465 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Mar 17 17:49:23.904471 kernel: random: crng init done Mar 17 17:49:23.904478 kernel: secureboot: Secure boot disabled Mar 17 17:49:23.904484 kernel: ACPI: Early table checksum verification disabled Mar 17 17:49:23.904490 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Mar 17 17:49:23.904497 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 17 17:49:23.904503 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904509 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904515 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904521 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904528 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904535 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904541 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904549 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904555 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:49:23.904561 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 17 17:49:23.904567 kernel: NUMA: Failed to initialise from firmware Mar 17 17:49:23.904573 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:49:23.904580 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Mar 17 17:49:23.904585 kernel: Zone ranges: Mar 17 17:49:23.904592 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:49:23.904599 kernel: DMA32 empty Mar 17 17:49:23.904605 kernel: Normal empty Mar 17 17:49:23.904611 kernel: Movable zone start for each node Mar 17 17:49:23.904617 kernel: Early memory node ranges Mar 17 17:49:23.904623 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Mar 17 17:49:23.904629 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Mar 17 17:49:23.904635 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Mar 17 17:49:23.904641 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Mar 17 17:49:23.904646 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Mar 17 17:49:23.904653 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Mar 17 17:49:23.904659 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Mar 17 17:49:23.904665 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Mar 17 17:49:23.904673 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Mar 17 17:49:23.904679 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:49:23.904685 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 17 17:49:23.904701 kernel: psci: 
probing for conduit method from ACPI. Mar 17 17:49:23.904708 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 17:49:23.904715 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 17:49:23.904723 kernel: psci: Trusted OS migration not required Mar 17 17:49:23.904729 kernel: psci: SMC Calling Convention v1.1 Mar 17 17:49:23.904736 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 17 17:49:23.904742 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 17:49:23.904749 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 17:49:23.904755 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 17 17:49:23.904762 kernel: Detected PIPT I-cache on CPU0 Mar 17 17:49:23.904768 kernel: CPU features: detected: GIC system register CPU interface Mar 17 17:49:23.904775 kernel: CPU features: detected: Hardware dirty bit management Mar 17 17:49:23.904782 kernel: CPU features: detected: Spectre-v4 Mar 17 17:49:23.904790 kernel: CPU features: detected: Spectre-BHB Mar 17 17:49:23.904796 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 17:49:23.904803 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 17:49:23.904821 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 17:49:23.904828 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 17:49:23.904834 kernel: alternatives: applying boot alternatives Mar 17 17:49:23.904842 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:49:23.904849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:49:23.904855 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:49:23.904862 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:49:23.904868 kernel: Fallback order for Node 0: 0 Mar 17 17:49:23.904876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 17 17:49:23.904883 kernel: Policy zone: DMA Mar 17 17:49:23.904889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:49:23.904895 kernel: software IO TLB: area num 4. Mar 17 17:49:23.904902 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Mar 17 17:49:23.904909 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved) Mar 17 17:49:23.904915 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 17:49:23.904922 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:49:23.904929 kernel: rcu: RCU event tracing is enabled. Mar 17 17:49:23.904935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 17:49:23.904942 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:49:23.904949 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:49:23.904957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:49:23.904963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 17:49:23.904969 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 17:49:23.904976 kernel: GICv3: 256 SPIs implemented Mar 17 17:49:23.904982 kernel: GICv3: 0 Extended SPIs implemented Mar 17 17:49:23.904988 kernel: Root IRQ handler: gic_handle_irq Mar 17 17:49:23.904995 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 17 17:49:23.905001 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 17 17:49:23.905007 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 17 17:49:23.905014 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 17:49:23.905028 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Mar 17 17:49:23.905037 kernel: GICv3: using LPI property table @0x00000000400f0000 Mar 17 17:49:23.905043 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Mar 17 17:49:23.905050 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:49:23.905056 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:49:23.905063 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 17:49:23.905069 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 17:49:23.905076 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 17:49:23.905082 kernel: arm-pv: using stolen time PV Mar 17 17:49:23.905089 kernel: Console: colour dummy device 80x25 Mar 17 17:49:23.905096 kernel: ACPI: Core revision 20230628 Mar 17 17:49:23.905103 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 17:49:23.905111 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:49:23.905118 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:49:23.905124 kernel: landlock: Up and running. Mar 17 17:49:23.905131 kernel: SELinux: Initializing. Mar 17 17:49:23.905137 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:49:23.905147 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:49:23.905154 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:49:23.905161 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:49:23.905168 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:49:23.905176 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:49:23.905182 kernel: Platform MSI: ITS@0x8080000 domain created Mar 17 17:49:23.905189 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 17 17:49:23.905195 kernel: Remapping and enabling EFI services. Mar 17 17:49:23.905202 kernel: smp: Bringing up secondary CPUs ... 
Mar 17 17:49:23.905209 kernel: Detected PIPT I-cache on CPU1 Mar 17 17:49:23.905215 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 17 17:49:23.905222 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Mar 17 17:49:23.905229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:49:23.905237 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 17:49:23.905244 kernel: Detected PIPT I-cache on CPU2 Mar 17 17:49:23.905256 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 17 17:49:23.905265 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Mar 17 17:49:23.905274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:49:23.905281 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 17 17:49:23.905288 kernel: Detected PIPT I-cache on CPU3 Mar 17 17:49:23.905297 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 17 17:49:23.905306 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Mar 17 17:49:23.905317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:49:23.905326 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 17 17:49:23.905334 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 17:49:23.905341 kernel: SMP: Total of 4 processors activated. Mar 17 17:49:23.905349 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 17:49:23.905356 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 17:49:23.905363 kernel: CPU features: detected: Common not Private translations Mar 17 17:49:23.905370 kernel: CPU features: detected: CRC32 instructions Mar 17 17:49:23.905378 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 17 17:49:23.905385 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 17:49:23.905392 kernel: CPU features: detected: LSE atomic instructions Mar 17 17:49:23.905399 kernel: CPU features: detected: Privileged Access Never Mar 17 17:49:23.905406 kernel: CPU features: detected: RAS Extension Support Mar 17 17:49:23.905413 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 17 17:49:23.905420 kernel: CPU: All CPU(s) started at EL1 Mar 17 17:49:23.905427 kernel: alternatives: applying system-wide alternatives Mar 17 17:49:23.905433 kernel: devtmpfs: initialized Mar 17 17:49:23.905440 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:49:23.905449 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 17:49:23.905456 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:49:23.905463 kernel: SMBIOS 3.0.0 present. 
Mar 17 17:49:23.905470 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Mar 17 17:49:23.905477 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:49:23.905484 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 17:49:23.905491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 17:49:23.905498 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 17:49:23.905506 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:49:23.905514 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Mar 17 17:49:23.905521 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:49:23.905528 kernel: cpuidle: using governor menu Mar 17 17:49:23.905535 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 17 17:49:23.905542 kernel: ASID allocator initialised with 32768 entries Mar 17 17:49:23.905549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:49:23.905555 kernel: Serial: AMBA PL011 UART driver Mar 17 17:49:23.905562 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 17 17:49:23.905570 kernel: Modules: 0 pages in range for non-PLT usage Mar 17 17:49:23.905577 kernel: Modules: 509280 pages in range for PLT usage Mar 17 17:49:23.905584 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:49:23.905591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:49:23.905598 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 17:49:23.905605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 17:49:23.905612 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:49:23.905619 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:49:23.905625 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 17:49:23.905632 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 17:49:23.905641 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:49:23.905648 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:49:23.905654 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:49:23.905661 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:49:23.905668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:49:23.905675 kernel: ACPI: Interpreter enabled Mar 17 17:49:23.905682 kernel: ACPI: Using GIC for interrupt routing Mar 17 17:49:23.905689 kernel: ACPI: MCFG table detected, 1 entries Mar 17 17:49:23.905700 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 17 17:49:23.905710 kernel: printk: console [ttyAMA0] enabled Mar 17 17:49:23.905717 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:49:23.905877 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:49:23.905950 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 17:49:23.906029 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 17:49:23.906102 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 17 17:49:23.906167 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 17 17:49:23.906179 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 17 17:49:23.906186 
kernel: PCI host bridge to bus 0000:00 Mar 17 17:49:23.906259 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 17 17:49:23.906320 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 17:49:23.906379 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 17 17:49:23.906436 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:49:23.906516 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 17 17:49:23.906597 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:49:23.906666 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 17 17:49:23.906744 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 17 17:49:23.906812 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:49:23.906877 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:49:23.906943 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 17 17:49:23.907013 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 17 17:49:23.907096 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 17 17:49:23.907153 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 17:49:23.907210 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 17 17:49:23.907220 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 17:49:23.907227 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 17:49:23.907234 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 17:49:23.907241 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 17:49:23.907250 kernel: iommu: Default domain type: Translated Mar 17 17:49:23.907257 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 17:49:23.907264 kernel: efivars: Registered efivars operations Mar 17 17:49:23.907271 kernel: vgaarb: loaded Mar 17 17:49:23.907278 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 17:49:23.907285 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:49:23.907292 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:49:23.907299 kernel: pnp: PnP ACPI init Mar 17 17:49:23.907376 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 17 17:49:23.907388 kernel: pnp: PnP ACPI: found 1 devices Mar 17 17:49:23.907395 kernel: NET: Registered PF_INET protocol family Mar 17 17:49:23.907403 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:49:23.907410 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:49:23.907417 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:49:23.907424 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:49:23.907431 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:49:23.907438 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:49:23.907446 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:49:23.907453 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:49:23.907460 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:49:23.907467 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:49:23.907474 kernel: kvm [1]: HYP mode not available 
Mar 17 17:49:23.907481 kernel: Initialise system trusted keyrings Mar 17 17:49:23.907488 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:49:23.907495 kernel: Key type asymmetric registered Mar 17 17:49:23.907502 kernel: Asymmetric key parser 'x509' registered Mar 17 17:49:23.907509 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 17:49:23.907518 kernel: io scheduler mq-deadline registered Mar 17 17:49:23.907525 kernel: io scheduler kyber registered Mar 17 17:49:23.907532 kernel: io scheduler bfq registered Mar 17 17:49:23.907539 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:49:23.907546 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:49:23.907553 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:49:23.907620 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 17 17:49:23.907630 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:49:23.907637 kernel: thunder_xcv, ver 1.0 Mar 17 17:49:23.907645 kernel: thunder_bgx, ver 1.0 Mar 17 17:49:23.907653 kernel: nicpf, ver 1.0 Mar 17 17:49:23.907660 kernel: nicvf, ver 1.0 Mar 17 17:49:23.907742 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:49:23.907805 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:49:23 UTC (1742233763) Mar 17 17:49:23.907815 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:49:23.907822 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:49:23.907829 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:49:23.907838 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:49:23.907845 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:49:23.907853 kernel: Segment Routing with IPv6 Mar 17 17:49:23.907859 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:49:23.907867 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:49:23.907874 kernel: Key type dns_resolver registered Mar 17 17:49:23.907881 kernel: registered taskstats version 1 Mar 17 17:49:23.907888 kernel: Loading compiled-in X.509 certificates Mar 17 17:49:23.907895 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 17:49:23.907903 kernel: Key type .fscrypt registered Mar 17 17:49:23.907910 kernel: Key type fscrypt-provisioning registered Mar 17 17:49:23.907917 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:49:23.907924 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:49:23.907931 kernel: ima: No architecture policies found Mar 17 17:49:23.907939 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:49:23.907945 kernel: clk: Disabling unused clocks Mar 17 17:49:23.907952 kernel: Freeing unused kernel memory: 38336K Mar 17 17:49:23.907960 kernel: Run /init as init process Mar 17 17:49:23.907967 kernel: with arguments: Mar 17 17:49:23.907974 kernel: /init Mar 17 17:49:23.907981 kernel: with environment: Mar 17 17:49:23.907988 kernel: HOME=/ Mar 17 17:49:23.907995 kernel: TERM=linux Mar 17 17:49:23.908002 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:49:23.908009 systemd[1]: Successfully made /usr/ read-only. 
Mar 17 17:49:23.908028 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:49:23.908044 systemd[1]: Detected virtualization kvm. Mar 17 17:49:23.908053 systemd[1]: Detected architecture arm64. Mar 17 17:49:23.908060 systemd[1]: Running in initrd. Mar 17 17:49:23.908068 systemd[1]: No hostname configured, using default hostname. Mar 17 17:49:23.908075 systemd[1]: Hostname set to . Mar 17 17:49:23.908082 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:49:23.908090 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:49:23.908097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:49:23.908108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:49:23.908116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:49:23.908123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:49:23.908131 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:49:23.908140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:49:23.908148 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:49:23.908158 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:49:23.908165 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:49:23.908173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:49:23.908180 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:49:23.908188 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:49:23.908196 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:49:23.908203 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:49:23.908211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:49:23.908219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:49:23.908229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:49:23.908236 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:49:23.908258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:49:23.908266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:49:23.908274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:49:23.908281 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:49:23.908290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:49:23.908297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:49:23.908307 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:49:23.908314 systemd[1]: Starting systemd-fsck-usr.service... 
Mar 17 17:49:23.908322 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:49:23.908329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:49:23.908337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:49:23.908345 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:49:23.908353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:49:23.908362 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:49:23.908370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:49:23.908400 systemd-journald[239]: Collecting audit messages is disabled. Mar 17 17:49:23.908422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:49:23.908431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:23.908438 kernel: Bridge firewalling registered Mar 17 17:49:23.908446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:49:23.908455 systemd-journald[239]: Journal started Mar 17 17:49:23.908474 systemd-journald[239]: Runtime Journal (/run/log/journal/83f55813692f4754a2b47accd8b6420d) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:49:23.885397 systemd-modules-load[240]: Inserted module 'overlay' Mar 17 17:49:23.910133 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:49:23.905798 systemd-modules-load[240]: Inserted module 'br_netfilter' Mar 17 17:49:23.911638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:49:23.913665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:49:23.918251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:49:23.919642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:49:23.921605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:49:23.927999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:49:23.929887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:49:23.933421 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:49:23.934534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:49:23.957269 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:49:23.959190 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:49:23.968166 dracut-cmdline[275]: dracut-dracut-053 Mar 17 17:49:23.970770 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:49:23.994582 systemd-resolved[278]: Positive Trust Anchors: Mar 17 17:49:23.994601 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:49:23.994631 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:49:23.999296 systemd-resolved[278]: Defaulting to hostname 'linux'. Mar 17 17:49:24.000318 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:49:24.002434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:49:24.042046 kernel: SCSI subsystem initialized Mar 17 17:49:24.046042 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:49:24.054053 kernel: iscsi: registered transport (tcp) Mar 17 17:49:24.066315 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:49:24.066341 kernel: QLogic iSCSI HBA Driver Mar 17 17:49:24.114056 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:49:24.125193 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:49:24.143677 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:49:24.143748 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:49:24.144548 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:49:24.196051 kernel: raid6: neonx8 gen() 15754 MB/s Mar 17 17:49:24.213043 kernel: raid6: neonx4 gen() 15745 MB/s Mar 17 17:49:24.230037 kernel: raid6: neonx2 gen() 13170 MB/s Mar 17 17:49:24.247045 kernel: raid6: neonx1 gen() 10502 MB/s Mar 17 17:49:24.264044 kernel: raid6: int64x8 gen() 6773 MB/s Mar 17 17:49:24.281046 kernel: raid6: int64x4 gen() 7330 MB/s Mar 17 17:49:24.298037 kernel: raid6: int64x2 gen() 6096 MB/s Mar 17 17:49:24.315046 kernel: raid6: int64x1 gen() 5031 MB/s Mar 17 17:49:24.315063 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s Mar 17 17:49:24.332039 kernel: raid6: .... xor() 11930 MB/s, rmw enabled Mar 17 17:49:24.332054 kernel: raid6: using neon recovery algorithm Mar 17 17:49:24.337036 kernel: xor: measuring software checksum speed Mar 17 17:49:24.337051 kernel: 8regs : 20069 MB/sec Mar 17 17:49:24.338034 kernel: 32regs : 21710 MB/sec Mar 17 17:49:24.339036 kernel: arm64_neon : 26155 MB/sec Mar 17 17:49:24.339049 kernel: xor: using function: arm64_neon (26155 MB/sec) Mar 17 17:49:24.389928 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:49:24.401471 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:49:24.414199 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:49:24.431744 systemd-udevd[461]: Using default interface naming scheme 'v255'. Mar 17 17:49:24.437509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:49:24.457263 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 17 17:49:24.468815 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Mar 17 17:49:24.498114 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:49:24.513191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:49:24.556510 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:49:24.567256 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:49:24.579163 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:49:24.580712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:49:24.583201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:49:24.584823 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:49:24.594486 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:49:24.607684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:49:24.611250 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 17 17:49:24.623114 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 17:49:24.623214 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:49:24.623225 kernel: GPT:9289727 != 19775487 Mar 17 17:49:24.623242 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:49:24.623253 kernel: GPT:9289727 != 19775487 Mar 17 17:49:24.623261 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:49:24.623270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:49:24.623776 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:49:24.623898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:49:24.626294 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:49:24.627093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:49:24.627283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:24.631381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:49:24.639096 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514) Mar 17 17:49:24.639408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:49:24.644133 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (529) Mar 17 17:49:24.651725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:24.670281 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:49:24.677581 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:49:24.684899 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:49:24.690803 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:49:24.691720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:49:24.708158 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 17 17:49:24.710096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:49:24.714819 disk-uuid[556]: Primary Header is updated. Mar 17 17:49:24.714819 disk-uuid[556]: Secondary Entries is updated. Mar 17 17:49:24.714819 disk-uuid[556]: Secondary Header is updated. Mar 17 17:49:24.718077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:49:24.731103 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:49:25.736045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:49:25.736705 disk-uuid[557]: The operation has completed successfully. Mar 17 17:49:25.757854 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:49:25.757951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:49:25.805224 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:49:25.808238 sh[579]: Success Mar 17 17:49:25.825047 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:49:25.862926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:49:25.881373 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:49:25.883472 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:49:25.892497 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 17:49:25.892536 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:49:25.892547 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:49:25.894472 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:49:25.894503 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:49:25.897623 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:49:25.898744 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:49:25.909230 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:49:25.911759 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:49:25.925077 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:49:25.925136 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:49:25.925147 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:49:25.928062 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:49:25.936854 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:49:25.938321 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:49:25.946163 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:49:25.952199 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:49:26.024041 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:49:26.034261 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 17 17:49:26.060081 systemd-networkd[771]: lo: Link UP Mar 17 17:49:26.060093 systemd-networkd[771]: lo: Gained carrier Mar 17 17:49:26.060930 systemd-networkd[771]: Enumeration completed Mar 17 17:49:26.061427 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:49:26.061432 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:49:26.062075 ignition[673]: Ignition 2.20.0 Mar 17 17:49:26.061454 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:49:26.062081 ignition[673]: Stage: fetch-offline Mar 17 17:49:26.062902 systemd[1]: Reached target network.target - Network. Mar 17 17:49:26.062116 ignition[673]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:26.063725 systemd-networkd[771]: eth0: Link UP Mar 17 17:49:26.062124 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:49:26.063728 systemd-networkd[771]: eth0: Gained carrier Mar 17 17:49:26.062273 ignition[673]: parsed url from cmdline: "" Mar 17 17:49:26.063736 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:49:26.062277 ignition[673]: no config URL provided Mar 17 17:49:26.062281 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:49:26.062288 ignition[673]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:49:26.062311 ignition[673]: op(1): [started] loading QEMU firmware config module Mar 17 17:49:26.062316 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:49:26.069872 ignition[673]: op(1): [finished] loading QEMU firmware config module Mar 17 17:49:26.081305 ignition[673]: parsing config with SHA512: b34ef782588fff0de06c34e7dc896f9f133167332143c3fa124c752cbd9e448819dc6058f527fa0cb8e9732f1d4622a85ca8c6b124933428d5690ba1fe313bd3 Mar 17 17:49:26.084989 unknown[673]: fetched base config from "system" Mar 17 17:49:26.085007 unknown[673]: fetched user config from "qemu" Mar 17 17:49:26.085274 ignition[673]: fetch-offline: fetch-offline passed Mar 17 17:49:26.085079 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:49:26.085348 ignition[673]: Ignition finished successfully Mar 17 17:49:26.086767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:49:26.087969 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 17:49:26.098253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:49:26.110950 ignition[781]: Ignition 2.20.0 Mar 17 17:49:26.110966 ignition[781]: Stage: kargs Mar 17 17:49:26.111160 ignition[781]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:26.111171 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:49:26.113511 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:49:26.111855 ignition[781]: kargs: kargs passed Mar 17 17:49:26.111903 ignition[781]: Ignition finished successfully Mar 17 17:49:26.124273 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 17 17:49:26.133506 ignition[791]: Ignition 2.20.0 Mar 17 17:49:26.133516 ignition[791]: Stage: disks Mar 17 17:49:26.133673 ignition[791]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:26.135978 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:49:26.133683 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:49:26.137090 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:49:26.134358 ignition[791]: disks: disks passed Mar 17 17:49:26.134403 ignition[791]: Ignition finished successfully Mar 17 17:49:26.140015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:49:26.141356 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:49:26.142598 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:49:26.144131 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:49:26.146240 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:49:26.159793 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:49:26.163538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:49:26.900100 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:49:26.941904 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:49:26.943306 kernel: EXT4-fs (vda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none. Mar 17 17:49:26.943003 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:49:26.955121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:49:26.956611 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:49:26.957530 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:49:26.957570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:49:26.966996 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (810) Mar 17 17:49:26.967039 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:49:26.967109 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:49:26.967125 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:49:26.957596 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:49:26.965448 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:49:26.970926 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:49:26.968524 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:49:26.971136 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:49:27.010565 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:49:27.014755 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:49:27.019153 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:49:27.026509 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:49:27.105543 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 17 17:49:27.116145 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:49:27.117528 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:49:27.123034 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:49:27.139531 ignition[923]: INFO : Ignition 2.20.0 Mar 17 17:49:27.139531 ignition[923]: INFO : Stage: mount Mar 17 17:49:27.141119 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:27.141119 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:49:27.141119 ignition[923]: INFO : mount: mount passed Mar 17 17:49:27.141119 ignition[923]: INFO : Ignition finished successfully Mar 17 17:49:27.142947 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:49:27.150137 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:49:27.151526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:49:27.350191 systemd-networkd[771]: eth0: Gained IPv6LL Mar 17 17:49:27.891960 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:49:27.904292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:49:27.910069 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) Mar 17 17:49:27.913258 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:49:27.913296 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:49:27.913307 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:49:27.915038 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:49:27.916568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:49:27.935148 ignition[955]: INFO : Ignition 2.20.0 Mar 17 17:49:27.935148 ignition[955]: INFO : Stage: files Mar 17 17:49:27.936578 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:27.936578 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:49:27.936578 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:49:27.941331 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:49:27.941331 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:49:27.941331 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:49:27.941331 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:49:27.941331 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:49:27.940458 unknown[955]: wrote ssh authorized keys file for user: core Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 17:49:27.949520 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 17 17:49:28.300157 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 17:49:28.600761 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 17:49:28.600761 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 17 17:49:28.604818 ignition[955]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:49:28.604818 ignition[955]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:49:28.604818 ignition[955]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Mar 17 17:49:28.604818 ignition[955]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:49:28.618859 ignition[955]: INFO : 
files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:49:28.623941 ignition[955]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:49:28.625584 ignition[955]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:49:28.625584 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:49:28.629916 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:49:28.629916 ignition[955]: INFO : files: files passed Mar 17 17:49:28.629916 ignition[955]: INFO : Ignition finished successfully Mar 17 17:49:28.629300 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:49:28.642244 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:49:28.644311 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:49:28.648094 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:49:28.648275 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:49:28.657980 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:49:28.661875 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:49:28.661875 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:49:28.664898 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:49:28.665658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:49:28.667532 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:49:28.680265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:49:28.700406 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:49:28.700524 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:49:28.702485 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:49:28.704135 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:49:28.705735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:49:28.706666 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:49:28.721980 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:49:28.724406 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:49:28.735618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:49:28.736863 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:49:28.738711 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:49:28.740243 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:49:28.740368 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:49:28.742626 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Mar 17 17:49:28.744464 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:49:28.745987 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:49:28.747724 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:49:28.749577 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:49:28.751706 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:49:28.753480 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:49:28.755337 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:49:28.757195 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:49:28.758860 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:49:28.760292 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:49:28.760425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:49:28.762589 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:49:28.764455 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:49:28.766259 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:49:28.767130 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:49:28.769006 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:49:28.769139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:49:28.771655 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:49:28.771779 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:49:28.773704 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:49:28.775169 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:49:28.780086 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:49:28.781403 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:49:28.783113 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:49:28.784471 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:49:28.784556 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:49:28.785898 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:49:28.785973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:49:28.787323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:49:28.787435 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:49:28.789063 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:49:28.789164 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:49:28.802256 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:49:28.804069 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:49:28.804922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:49:28.805088 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:49:28.806916 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:49:28.807008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 17 17:49:28.813400 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:49:28.814068 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:49:28.817086 ignition[1009]: INFO : Ignition 2.20.0 Mar 17 17:49:28.817086 ignition[1009]: INFO : Stage: umount Mar 17 17:49:28.818537 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:28.818537 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:49:28.818537 ignition[1009]: INFO : umount: umount passed Mar 17 17:49:28.818537 ignition[1009]: INFO : Ignition finished successfully Mar 17 17:49:28.819882 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:49:28.820458 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:49:28.820548 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:49:28.822436 systemd[1]: Stopped target network.target - Network. Mar 17 17:49:28.823296 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:49:28.823375 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:49:28.824788 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:49:28.824837 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:49:28.826115 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:49:28.826158 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:49:28.828250 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:49:28.828311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:49:28.829947 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:49:28.830882 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:49:28.832583 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:49:28.832681 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:49:28.834299 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:49:28.834396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:49:28.840121 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:49:28.840225 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:49:28.844357 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:49:28.844571 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:49:28.844661 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:49:28.847738 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:49:28.848755 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:49:28.848796 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:49:28.861205 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:49:28.861883 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:49:28.861949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:49:28.863618 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:49:28.863663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:49:28.866093 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 17 17:49:28.866135 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:49:28.867517 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:49:28.867559 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:49:28.869734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:49:28.877687 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:49:28.877809 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:49:28.890856 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:49:28.891009 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:49:28.894185 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:49:28.894254 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:49:28.894418 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:49:28.894453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:49:28.896104 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:49:28.896136 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:49:28.898071 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:49:28.898125 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:49:28.900563 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:49:28.900614 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:49:28.903072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:49:28.903143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:49:28.916222 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:49:28.917418 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:49:28.917494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:49:28.920137 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:49:28.920186 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:49:28.922218 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:49:28.922266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:49:28.924058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:49:28.924105 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:28.927515 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:49:28.927573 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:49:28.927929 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:49:28.928075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:49:28.930635 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:49:28.948247 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Mar 17 17:49:28.955915 systemd[1]: Switching root. Mar 17 17:49:28.982336 systemd-journald[239]: Journal stopped Mar 17 17:49:29.751922 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Mar 17 17:49:29.751975 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:49:29.751987 kernel: SELinux: policy capability open_perms=1 Mar 17 17:49:29.751997 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:49:29.752006 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:49:29.752018 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:49:29.752060 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:49:29.752070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:49:29.752079 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:49:29.752092 kernel: audit: type=1403 audit(1742233769.107:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:49:29.752102 systemd[1]: Successfully loaded SELinux policy in 34.942ms. Mar 17 17:49:29.752122 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.170ms. Mar 17 17:49:29.752135 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:49:29.752148 systemd[1]: Detected virtualization kvm. Mar 17 17:49:29.752158 systemd[1]: Detected architecture arm64. Mar 17 17:49:29.752167 systemd[1]: Detected first boot. Mar 17 17:49:29.752177 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:49:29.752187 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:49:29.752196 zram_generator::config[1058]: No configuration found. Mar 17 17:49:29.752209 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:49:29.752220 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:49:29.752230 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:49:29.752239 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:49:29.752250 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:49:29.752260 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:49:29.752270 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:49:29.752280 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:49:29.752292 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:49:29.752302 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:49:29.752313 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:49:29.752323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:49:29.752333 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:49:29.752344 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:49:29.752355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
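After the switch-root above, PID 1 loads the SELinux policy and initializes the machine ID from the VM UUID. A small sketch for confirming both on the running system, assuming an SELinux-enabled kernel that exposes /sys/fs/selinux:

    from pathlib import Path

    # /sys/fs/selinux only exists when the kernel booted with SELinux enabled.
    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        mode = "enforcing" if enforce.read_text().strip() == "1" else "permissive"
        print("SELinux enabled,", mode)
    else:
        print("SELinux not enabled on this kernel")

    # The machine ID that PID 1 initialized on first boot (from the VM UUID here).
    print("machine-id:", Path("/etc/machine-id").read_text().strip())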
Mar 17 17:49:29.752366 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:49:29.752376 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:49:29.752388 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:49:29.752398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:49:29.752408 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:49:29.752418 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:49:29.752428 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:49:29.752438 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:49:29.752449 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:49:29.752459 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:49:29.752470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:49:29.752481 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:49:29.752491 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:49:29.752501 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:49:29.752511 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:49:29.752521 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:49:29.752531 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:49:29.752541 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:49:29.752552 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:49:29.752563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:49:29.752574 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:49:29.752585 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:49:29.752595 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:49:29.752605 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:49:29.752615 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:49:29.752625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:49:29.752635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:49:29.752645 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:49:29.752656 systemd[1]: Reached target machines.target - Containers. Mar 17 17:49:29.752667 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:49:29.752677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:49:29.752687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:49:29.752698 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:49:29.752708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 17 17:49:29.752724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:49:29.752739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:49:29.752752 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:49:29.752762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:49:29.752773 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:49:29.752783 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:49:29.752794 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:49:29.752804 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:49:29.752816 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:49:29.752827 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:49:29.752838 kernel: fuse: init (API version 7.39) Mar 17 17:49:29.752852 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:49:29.752862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:49:29.752872 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:49:29.752882 kernel: ACPI: bus type drm_connector registered Mar 17 17:49:29.752891 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:49:29.752901 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:49:29.752911 kernel: loop: module loaded Mar 17 17:49:29.752921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:49:29.752932 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:49:29.752943 systemd[1]: Stopped verity-setup.service. Mar 17 17:49:29.752975 systemd-journald[1130]: Collecting audit messages is disabled. Mar 17 17:49:29.752996 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:49:29.753010 systemd-journald[1130]: Journal started Mar 17 17:49:29.753040 systemd-journald[1130]: Runtime Journal (/run/log/journal/83f55813692f4754a2b47accd8b6420d) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:49:29.548682 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:49:29.563166 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:49:29.563551 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:49:29.754068 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:49:29.755392 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:49:29.756339 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:49:29.757198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:49:29.758231 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:49:29.759313 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:49:29.761077 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
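journald above starts with a 5.9M runtime journal (capped at 47.3M) under /run/log/journal; once systemd-journal-flush runs, entries move to the persistent journal under /var/log/journal. A quick way to check the same accounting later, assuming journalctl is on PATH:

    import subprocess

    # Combined size of active and archived journal files, the figure journald
    # manages against the caps it reports for /run/log/journal and /var/log/journal.
    subprocess.run(["journalctl", "--disk-usage"], check=True)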
Mar 17 17:49:29.762296 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:49:29.763580 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:49:29.763856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:49:29.765119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:49:29.765374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:49:29.766550 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:49:29.766701 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:49:29.767810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:49:29.767975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:49:29.769374 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:49:29.769531 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:49:29.770647 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:49:29.770829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:49:29.771989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:49:29.773412 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:49:29.774778 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:49:29.776118 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:49:29.788151 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:49:29.797160 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:49:29.799395 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:49:29.800240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:49:29.800283 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:49:29.801985 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:49:29.803978 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:49:29.805964 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:49:29.806909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:49:29.808339 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:49:29.810068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:49:29.811014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:49:29.814206 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:49:29.816169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:49:29.817146 systemd-journald[1130]: Time spent on flushing to /var/log/journal/83f55813692f4754a2b47accd8b6420d is 18.267ms for 852 entries. 
Mar 17 17:49:29.817146 systemd-journald[1130]: System Journal (/var/log/journal/83f55813692f4754a2b47accd8b6420d) is 8M, max 195.6M, 187.6M free. Mar 17 17:49:29.840079 systemd-journald[1130]: Received client request to flush runtime journal. Mar 17 17:49:29.818862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:49:29.823277 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:49:29.826313 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:49:29.829558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:49:29.830803 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:49:29.831981 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:49:29.835211 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:49:29.840215 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:49:29.845446 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:49:29.849382 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:49:29.856228 kernel: loop0: detected capacity change from 0 to 113512 Mar 17 17:49:29.858211 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:49:29.859289 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Mar 17 17:49:29.859299 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Mar 17 17:49:29.861201 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:49:29.862998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:49:29.867118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:49:29.870339 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:49:29.879123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:49:29.879288 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:49:29.885361 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:49:29.904112 kernel: loop1: detected capacity change from 0 to 123192 Mar 17 17:49:29.906538 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:49:29.913251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:49:29.929166 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Mar 17 17:49:29.929184 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Mar 17 17:49:29.933396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:49:29.937330 kernel: loop2: detected capacity change from 0 to 189592 Mar 17 17:49:29.972452 kernel: loop3: detected capacity change from 0 to 113512 Mar 17 17:49:29.978568 kernel: loop4: detected capacity change from 0 to 123192 Mar 17 17:49:29.985145 kernel: loop5: detected capacity change from 0 to 189592 Mar 17 17:49:29.990052 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Mar 17 17:49:29.990443 (sd-merge)[1204]: Merged extensions into '/usr'. Mar 17 17:49:29.993742 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:49:29.993758 systemd[1]: Reloading... Mar 17 17:49:30.043141 zram_generator::config[1232]: No configuration found. Mar 17 17:49:30.096377 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:49:30.132398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:49:30.182185 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:49:30.182386 systemd[1]: Reloading finished in 188 ms. Mar 17 17:49:30.206225 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:49:30.207364 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:49:30.217200 systemd[1]: Starting ensure-sysext.service... Mar 17 17:49:30.218763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:49:30.230796 systemd[1]: Reload requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:49:30.230810 systemd[1]: Reloading... Mar 17 17:49:30.236155 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:49:30.236358 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:49:30.236979 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:49:30.237304 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Mar 17 17:49:30.237359 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Mar 17 17:49:30.239968 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:49:30.239979 systemd-tmpfiles[1267]: Skipping /boot Mar 17 17:49:30.248617 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:49:30.248633 systemd-tmpfiles[1267]: Skipping /boot Mar 17 17:49:30.283805 zram_generator::config[1296]: No configuration found. Mar 17 17:49:30.368567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:49:30.420350 systemd[1]: Reloading finished in 189 ms. Mar 17 17:49:30.431532 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:49:30.448056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:49:30.455895 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:49:30.458293 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:49:30.460425 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:49:30.463294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:49:30.478285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
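systemd-sysext merged the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr just before the daemon reload above. A minimal sketch that lists candidate extension images in the standard sysext search directories; the exact locations used on this particular machine are an assumption:

    from pathlib import Path

    # Standard directories systemd-sysext scans for extension images
    # (raw images or plain directory trees). Locations are an assumption here.
    for d in (Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            kind = "dir" if entry.is_dir() else "image"
            print(f"{d}/{entry.name} ({kind})")

On a live system, `systemd-sysext status` reports the same merged view that the reload above picked up.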
Mar 17 17:49:30.480420 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:49:30.487068 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:49:30.490533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:49:30.495316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:49:30.499411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:49:30.503320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:49:30.504312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:49:30.504439 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:49:30.506331 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:49:30.511315 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:49:30.513082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:49:30.513252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:49:30.514931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:49:30.517105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:49:30.527014 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:49:30.527238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:49:30.530539 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:49:30.532283 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Mar 17 17:49:30.534947 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:49:30.536732 systemd[1]: Finished ensure-sysext.service. Mar 17 17:49:30.537875 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:49:30.543565 augenrules[1367]: No rules Mar 17 17:49:30.545349 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:49:30.545609 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:49:30.550239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:49:30.556196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:49:30.558094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:49:30.563225 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:49:30.565252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:49:30.565306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:49:30.569231 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 17 17:49:30.571066 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:49:30.572779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:49:30.575100 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:49:30.576388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:49:30.576664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:49:30.580273 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:49:30.580446 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:49:30.581517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:49:30.581662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:49:30.598374 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:49:30.599641 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:49:30.599754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:49:30.625047 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1385) Mar 17 17:49:30.642262 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:49:30.648176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:49:30.659265 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:49:30.667765 systemd-resolved[1335]: Positive Trust Anchors: Mar 17 17:49:30.671779 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:49:30.671817 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:49:30.691942 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:49:30.695667 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:49:30.697211 systemd-resolved[1335]: Defaulting to hostname 'linux'. Mar 17 17:49:30.697479 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:49:30.698684 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:49:30.702298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 17 17:49:30.740100 systemd-networkd[1407]: lo: Link UP Mar 17 17:49:30.740113 systemd-networkd[1407]: lo: Gained carrier Mar 17 17:49:30.744872 systemd-networkd[1407]: Enumeration completed Mar 17 17:49:30.744947 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:49:30.746175 systemd[1]: Reached target network.target - Network. Mar 17 17:49:30.748349 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:49:30.748362 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:49:30.750153 systemd-networkd[1407]: eth0: Link UP Mar 17 17:49:30.750161 systemd-networkd[1407]: eth0: Gained carrier Mar 17 17:49:30.750177 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:49:30.755227 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:49:30.757287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:49:30.762265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:49:30.768105 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:49:30.770439 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection. Mar 17 17:49:30.772949 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:49:30.773014 systemd-timesyncd[1392]: Initial clock synchronization to Mon 2025-03-17 17:49:30.927778 UTC. Mar 17 17:49:30.774245 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:49:30.776929 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:49:30.800465 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:49:30.812404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:30.817142 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:49:30.852772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:49:30.854426 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:49:30.855702 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:49:30.857007 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:49:30.858380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:49:30.859938 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:49:30.861383 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:49:30.862690 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:49:30.863956 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:49:30.863997 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:49:30.864905 systemd[1]: Reached target timers.target - Timer Units. 
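eth0 above was configured from /usr/lib/systemd/network/zz-default.network, the catch-all DHCP policy shipped in /usr, which is why networkd warns about matching on a "potentially unpredictable interface name". A small sketch that prints that unit and the link state of the same interface; it only reads the path named in the log plus the standard sysfs tree:

    from pathlib import Path

    # The catch-all network unit that matched eth0 above.
    print(Path("/usr/lib/systemd/network/zz-default.network").read_text())

    # Link-level state for the same interface, straight from sysfs.
    eth0 = Path("/sys/class/net/eth0")
    print("operstate:", (eth0 / "operstate").read_text().strip())
    print("mac:", (eth0 / "address").read_text().strip())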
Mar 17 17:49:30.867009 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:49:30.869607 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:49:30.873536 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:49:30.875039 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:49:30.877816 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:49:30.881179 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:49:30.883198 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:49:30.885822 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:49:30.887701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:49:30.888965 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:49:30.889971 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:49:30.890967 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:49:30.891001 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:49:30.892070 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:49:30.894118 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:49:30.894536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:49:30.897132 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:49:30.900262 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:49:30.901162 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:49:30.902225 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:49:30.907306 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:49:30.909563 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:49:30.911799 jq[1439]: false Mar 17 17:49:30.913175 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:49:30.916961 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:49:30.917434 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:49:30.922594 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:49:30.924663 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:49:30.928745 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:49:30.932904 jq[1449]: true Mar 17 17:49:30.933552 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:49:30.935061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:49:30.935352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
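The boot above reaches sockets.target with sshd.socket, docker.socket, the D-Bus socket and others in listening state, so the corresponding daemons only start on first connection. A quick confirmation sketch, assuming systemctl is available on the booted host:

    import subprocess

    # Active socket units and the services they activate (sshd.socket,
    # docker.socket, dbus.socket, ... as listed in the log above).
    subprocess.run(["systemctl", "list-sockets", "--no-pager"], check=True)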
Mar 17 17:49:30.937089 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:49:30.942132 extend-filesystems[1440]: Found loop3 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found loop4 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found loop5 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda1 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda2 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda3 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found usr Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda4 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda6 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda7 Mar 17 17:49:30.942963 extend-filesystems[1440]: Found vda9 Mar 17 17:49:30.942963 extend-filesystems[1440]: Checking size of /dev/vda9 Mar 17 17:49:30.958703 dbus-daemon[1438]: [system] SELinux support is enabled Mar 17 17:49:30.949264 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:49:30.961715 jq[1457]: true Mar 17 17:49:30.949520 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:49:30.959340 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:49:30.959632 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:49:30.962960 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:49:30.963016 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:49:30.964066 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:49:30.964091 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:49:30.976583 extend-filesystems[1440]: Resized partition /dev/vda9 Mar 17 17:49:30.981714 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:49:30.986756 update_engine[1447]: I20250317 17:49:30.986601 1447 main.cc:92] Flatcar Update Engine starting Mar 17 17:49:30.988055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1397) Mar 17 17:49:30.990998 update_engine[1447]: I20250317 17:49:30.990952 1447 update_check_scheduler.cc:74] Next update check in 6m15s Mar 17 17:49:30.991514 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:49:31.001238 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:49:31.005407 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:49:31.008296 systemd-logind[1445]: New seat seat0. Mar 17 17:49:31.010866 systemd[1]: Started systemd-logind.service - User Login Management. 
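extend-filesystems above checks /dev/vda9 and hands the grown partition to resize2fs for an online ext4 resize; the kernel lines just below report the jump from 553472 to 1864699 blocks at a 4 KiB block size. A tiny worked check of what that means in bytes:

    # Block counts from the resize reported just below; ext4 block size is 4 KiB.
    old_blocks, new_blocks, block_size = 553_472, 1_864_699, 4096

    def gib(blocks: int) -> float:
        return blocks * block_size / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")   # ~7.11 GiB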
Mar 17 17:49:31.039054 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:49:31.132072 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:49:31.135939 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:49:31.157477 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:49:31.157477 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:49:31.157477 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:49:31.161816 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Mar 17 17:49:31.160196 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:49:31.166079 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:49:31.160399 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:49:31.168104 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:49:31.170233 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:49:31.230726 containerd[1466]: time="2025-03-17T17:49:31.230634934Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:49:31.259296 containerd[1466]: time="2025-03-17T17:49:31.259226019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.260655 containerd[1466]: time="2025-03-17T17:49:31.260604483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:49:31.260655 containerd[1466]: time="2025-03-17T17:49:31.260638611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:49:31.260655 containerd[1466]: time="2025-03-17T17:49:31.260657123Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:49:31.260867 containerd[1466]: time="2025-03-17T17:49:31.260832249Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:49:31.260867 containerd[1466]: time="2025-03-17T17:49:31.260856755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.260931 containerd[1466]: time="2025-03-17T17:49:31.260914206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:49:31.260931 containerd[1466]: time="2025-03-17T17:49:31.260925868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261183 containerd[1466]: time="2025-03-17T17:49:31.261148293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261183 containerd[1466]: time="2025-03-17T17:49:31.261170882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261224 containerd[1466]: time="2025-03-17T17:49:31.261188048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261224 containerd[1466]: time="2025-03-17T17:49:31.261197874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261288 containerd[1466]: time="2025-03-17T17:49:31.261273022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261492 containerd[1466]: time="2025-03-17T17:49:31.261466171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261620 containerd[1466]: time="2025-03-17T17:49:31.261597628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:49:31.261620 containerd[1466]: time="2025-03-17T17:49:31.261616017Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:49:31.261730 containerd[1466]: time="2025-03-17T17:49:31.261713794Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:49:31.261797 containerd[1466]: time="2025-03-17T17:49:31.261767535Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:49:31.270560 containerd[1466]: time="2025-03-17T17:49:31.270513789Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:49:31.270620 containerd[1466]: time="2025-03-17T17:49:31.270572749Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:49:31.270620 containerd[1466]: time="2025-03-17T17:49:31.270590323Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:49:31.270620 containerd[1466]: time="2025-03-17T17:49:31.270606837Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:49:31.270695 containerd[1466]: time="2025-03-17T17:49:31.270621067Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:49:31.270828 containerd[1466]: time="2025-03-17T17:49:31.270793869Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271111136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271277944Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271296497Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271314234Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271330095Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271343510Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271357291Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271371970Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271386690Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271399942Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271412704Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271424447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271447118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274133 containerd[1466]: time="2025-03-17T17:49:31.271461063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271473009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271485650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271497026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271509829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271522795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271539186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271552968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271568666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271580491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271593417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271606138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271621714Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271642631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271656780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274439 containerd[1466]: time="2025-03-17T17:49:31.271674517Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.271922875Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.271945790Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.271957574Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.271970499Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.271980611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.271999775Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.272010744Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:49:31.274675 containerd[1466]: time="2025-03-17T17:49:31.272021019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.272417184Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.272471088Z" level=info msg="Connect containerd service" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.272504931Z" level=info msg="using legacy CRI server" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.272511740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.272743747Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.273480257Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:31.274809 
containerd[1466]: time="2025-03-17T17:49:31.273728574Z" level=info msg="Start subscribing containerd event" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.273777422Z" level=info msg="Start recovering state" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.273847921Z" level=info msg="Start event monitor" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.273858767Z" level=info msg="Start snapshots syncer" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.273867615Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:49:31.274809 containerd[1466]: time="2025-03-17T17:49:31.273876789Z" level=info msg="Start streaming server" Mar 17 17:49:31.275242 containerd[1466]: time="2025-03-17T17:49:31.275215661Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:49:31.275352 containerd[1466]: time="2025-03-17T17:49:31.275336027Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:49:31.275533 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:49:31.277615 containerd[1466]: time="2025-03-17T17:49:31.277582947Z" level=info msg="containerd successfully booted in 0.048335s" Mar 17 17:49:31.767357 systemd-networkd[1407]: eth0: Gained IPv6LL Mar 17 17:49:31.769916 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:49:31.773323 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:49:31.785980 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:49:31.788929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:31.791965 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:49:31.824365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:49:31.826599 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:49:31.827244 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:49:31.830271 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:49:32.171832 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:49:32.191283 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:49:32.203345 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:49:32.209974 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:49:32.210221 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:49:32.215010 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:49:32.226956 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:49:32.229775 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:49:32.232096 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:49:32.233664 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:49:32.288199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:32.289771 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:49:32.290946 systemd[1]: Startup finished in 567ms (kernel) + 5.409s (initrd) + 3.219s (userspace) = 9.195s. 
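The "failed to load cni during init" error above is expected on a first boot: containerd's CRI plugin scans its NetworkPluginConfDir (/etc/cni/net.d, per the config dump above), finds no network config, and leaves the pod network NotReady until a CNI provider drops one. A minimal sketch of that discovery step, assuming the standard github.com/containernetworking/cni/libcni package (containerd performs the equivalent scan through its own CNI wrapper):

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Scan the same directory the CRI plugin's conf syncer watches.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		fmt.Println("no CNI config found; pod network stays NotReady until one is written")
		return
	}
	for _, f := range files {
		list, err := libcni.ConfListFromFile(f)
		if err != nil {
			fmt.Printf("%s: %v\n", f, err) // single-plugin .conf files fail this parse
			continue
		}
		fmt.Printf("%s: network %q with %d plugin(s)\n", f, list.Name, len(list.Plugins))
	}
}

The "cni network conf syncer" started above re-checks that directory as files change, which is why a network provider installed later can clear this error without a containerd restart.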
Mar 17 17:49:32.293219 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:49:32.720013 kubelet[1546]: E0317 17:49:32.719949 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:49:32.722547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:49:32.722690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:49:32.723225 systemd[1]: kubelet.service: Consumed 777ms CPU time, 233.5M memory peak. Mar 17 17:49:36.360238 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:49:36.361774 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:48142.service - OpenSSH per-connection server daemon (10.0.0.1:48142). Mar 17 17:49:36.454283 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 48142 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:36.456048 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:36.465453 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:49:36.473303 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:49:36.478980 systemd-logind[1445]: New session 1 of user core. Mar 17 17:49:36.483487 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:49:36.486002 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:49:36.499729 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:49:36.503267 systemd-logind[1445]: New session c1 of user core. Mar 17 17:49:36.627049 systemd[1563]: Queued start job for default target default.target. Mar 17 17:49:36.638959 systemd[1563]: Created slice app.slice - User Application Slice. Mar 17 17:49:36.638986 systemd[1563]: Reached target paths.target - Paths. Mar 17 17:49:36.639023 systemd[1563]: Reached target timers.target - Timers. Mar 17 17:49:36.640275 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:49:36.649166 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:49:36.649226 systemd[1563]: Reached target sockets.target - Sockets. Mar 17 17:49:36.649262 systemd[1563]: Reached target basic.target - Basic System. Mar 17 17:49:36.649290 systemd[1563]: Reached target default.target - Main User Target. Mar 17 17:49:36.649315 systemd[1563]: Startup finished in 135ms. Mar 17 17:49:36.649440 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:49:36.657180 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:49:36.728361 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:48150.service - OpenSSH per-connection server daemon (10.0.0.1:48150). Mar 17 17:49:36.767834 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 48150 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:36.769048 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:36.773235 systemd-logind[1445]: New session 2 of user core. 
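The kubelet exit above (status=1/FAILURE) is the usual first-boot failure mode: kubelet.service starts before /var/lib/kubelet/config.yaml exists and stays down until provisioning writes that file and restarts the unit, likely the install.sh run in session 7 later in this log. A minimal sketch of writing such a file; the values are illustrative placeholders, except cgroupDriver and staticPodPath, which match the configuration dumped once the kubelet does start:

package main

import (
	"log"
	"os"
)

// Placeholder KubeletConfiguration; not the exact settings used on this node.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o600); err != nil {
		log.Fatal(err)
	}
}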
Mar 17 17:49:36.780188 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:49:36.833804 sshd[1576]: Connection closed by 10.0.0.1 port 48150 Mar 17 17:49:36.834293 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:36.845178 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:48150.service: Deactivated successfully. Mar 17 17:49:36.846558 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:49:36.848200 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:49:36.857285 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:48156.service - OpenSSH per-connection server daemon (10.0.0.1:48156). Mar 17 17:49:36.858297 systemd-logind[1445]: Removed session 2. Mar 17 17:49:36.892915 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 48156 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:36.894074 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:36.900436 systemd-logind[1445]: New session 3 of user core. Mar 17 17:49:36.911267 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:49:36.961747 sshd[1584]: Connection closed by 10.0.0.1 port 48156 Mar 17 17:49:36.962079 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:36.979399 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:48156.service: Deactivated successfully. Mar 17 17:49:36.980877 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:49:36.982458 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:49:36.991325 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:48158.service - OpenSSH per-connection server daemon (10.0.0.1:48158). Mar 17 17:49:36.992615 systemd-logind[1445]: Removed session 3. Mar 17 17:49:37.028599 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 48158 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:37.029846 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:37.034450 systemd-logind[1445]: New session 4 of user core. Mar 17 17:49:37.045224 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:49:37.099568 sshd[1592]: Connection closed by 10.0.0.1 port 48158 Mar 17 17:49:37.100153 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:37.106932 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:48158.service: Deactivated successfully. Mar 17 17:49:37.108406 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:49:37.110308 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:49:37.127384 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:48174.service - OpenSSH per-connection server daemon (10.0.0.1:48174). Mar 17 17:49:37.129229 systemd-logind[1445]: Removed session 4. Mar 17 17:49:37.165715 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 48174 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:37.167158 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:37.172433 systemd-logind[1445]: New session 5 of user core. Mar 17 17:49:37.182253 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:49:37.252685 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:49:37.252987 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:37.270023 sudo[1601]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:37.271655 sshd[1600]: Connection closed by 10.0.0.1 port 48174 Mar 17 17:49:37.272253 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:37.290599 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:48174.service: Deactivated successfully. Mar 17 17:49:37.293363 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:49:37.294308 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:49:37.306393 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:48190.service - OpenSSH per-connection server daemon (10.0.0.1:48190). Mar 17 17:49:37.307523 systemd-logind[1445]: Removed session 5. Mar 17 17:49:37.346342 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 48190 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:37.347513 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:37.352042 systemd-logind[1445]: New session 6 of user core. Mar 17 17:49:37.364558 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:49:37.418451 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:49:37.419195 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:37.422501 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:37.428333 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:49:37.428617 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:37.448384 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:49:37.476421 augenrules[1633]: No rules Mar 17 17:49:37.477087 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:49:37.478134 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:49:37.479506 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:37.480870 sshd[1609]: Connection closed by 10.0.0.1 port 48190 Mar 17 17:49:37.481575 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:37.495520 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:48190.service: Deactivated successfully. Mar 17 17:49:37.497129 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:49:37.498973 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:49:37.500201 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:48192.service - OpenSSH per-connection server daemon (10.0.0.1:48192). Mar 17 17:49:37.500989 systemd-logind[1445]: Removed session 6. Mar 17 17:49:37.544067 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 48192 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:49:37.545324 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:37.550158 systemd-logind[1445]: New session 7 of user core. Mar 17 17:49:37.561239 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 17 17:49:37.615653 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:49:37.616284 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:37.642437 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:49:37.662214 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:49:37.662480 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:49:38.293458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:38.293622 systemd[1]: kubelet.service: Consumed 777ms CPU time, 233.5M memory peak. Mar 17 17:49:38.311359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:38.335011 systemd[1]: Reload requested from client PID 1685 ('systemctl') (unit session-7.scope)... Mar 17 17:49:38.335037 systemd[1]: Reloading... Mar 17 17:49:38.438389 zram_generator::config[1731]: No configuration found. Mar 17 17:49:38.614544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:49:38.693687 systemd[1]: Reloading finished in 358 ms. Mar 17 17:49:38.741145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:38.744285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:38.744927 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:49:38.745197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:38.745249 systemd[1]: kubelet.service: Consumed 83ms CPU time, 82.4M memory peak. Mar 17 17:49:38.757347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:38.852998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:38.856994 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:49:38.897233 kubelet[1775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:49:38.897233 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:49:38.897233 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:49:38.897554 kubelet[1775]: I0317 17:49:38.897346 1775 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:49:39.786970 kubelet[1775]: I0317 17:49:39.786909 1775 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:49:39.786970 kubelet[1775]: I0317 17:49:39.786949 1775 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:49:39.787240 kubelet[1775]: I0317 17:49:39.787211 1775 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:49:39.827059 kubelet[1775]: I0317 17:49:39.826872 1775 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:49:39.840852 kubelet[1775]: E0317 17:49:39.840807 1775 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:49:39.840852 kubelet[1775]: I0317 17:49:39.840848 1775 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:49:39.848069 kubelet[1775]: I0317 17:49:39.848016 1775 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:49:39.848341 kubelet[1775]: I0317 17:49:39.848321 1775 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:49:39.848475 kubelet[1775]: I0317 17:49:39.848441 1775 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:49:39.848636 kubelet[1775]: I0317 17:49:39.848470 1775 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:49:39.848725 kubelet[1775]: I0317 17:49:39.848702 1775 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 17:49:39.848725 kubelet[1775]: I0317 17:49:39.848711 1775 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:49:39.848908 kubelet[1775]: I0317 17:49:39.848886 1775 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:49:39.851405 kubelet[1775]: I0317 17:49:39.851376 1775 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:49:39.851439 kubelet[1775]: I0317 17:49:39.851411 1775 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:49:39.851518 kubelet[1775]: I0317 17:49:39.851503 1775 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:49:39.851518 kubelet[1775]: I0317 17:49:39.851514 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:49:39.851938 kubelet[1775]: E0317 17:49:39.851699 1775 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:39.851938 kubelet[1775]: E0317 17:49:39.851748 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:39.855961 kubelet[1775]: I0317 17:49:39.855918 1775 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:49:39.861346 kubelet[1775]: I0317 17:49:39.861301 1775 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:49:39.862049 kubelet[1775]: W0317 17:49:39.862003 1775 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:49:39.862788 kubelet[1775]: I0317 17:49:39.862763 1775 server.go:1269] "Started kubelet" Mar 17 17:49:39.863670 kubelet[1775]: I0317 17:49:39.862948 1775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:49:39.863670 kubelet[1775]: I0317 17:49:39.863094 1775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:49:39.863670 kubelet[1775]: I0317 17:49:39.863343 1775 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:49:39.864455 kubelet[1775]: I0317 17:49:39.864385 1775 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:49:39.865134 kubelet[1775]: I0317 17:49:39.864923 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:49:39.867550 kubelet[1775]: I0317 17:49:39.866095 1775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:49:39.867550 kubelet[1775]: W0317 17:49:39.866266 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.117" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:49:39.867550 kubelet[1775]: E0317 17:49:39.866303 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.117\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:49:39.867550 kubelet[1775]: W0317 17:49:39.866447 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:49:39.867550 kubelet[1775]: E0317 17:49:39.866465 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:49:39.867550 kubelet[1775]: E0317 17:49:39.866752 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:39.867550 kubelet[1775]: I0317 17:49:39.866986 1775 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:49:39.867550 kubelet[1775]: I0317 17:49:39.867247 1775 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:49:39.867550 kubelet[1775]: I0317 17:49:39.867317 1775 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:49:39.869692 kubelet[1775]: E0317 17:49:39.869593 1775 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:49:39.870261 kubelet[1775]: I0317 17:49:39.870235 1775 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:49:39.870343 kubelet[1775]: I0317 17:49:39.870333 1775 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:49:39.871044 kubelet[1775]: I0317 17:49:39.870507 1775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:49:39.882195 kubelet[1775]: E0317 17:49:39.882151 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.117\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 17:49:39.883585 kubelet[1775]: I0317 17:49:39.883567 1775 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:49:39.883585 kubelet[1775]: I0317 17:49:39.883584 1775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:49:39.883670 kubelet[1775]: I0317 17:49:39.883609 1775 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:49:39.897853 kubelet[1775]: W0317 17:49:39.897753 1775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:49:39.897853 kubelet[1775]: E0317 17:49:39.897791 1775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 17 17:49:39.948898 kubelet[1775]: I0317 17:49:39.948829 1775 policy_none.go:49] "None policy: Start" Mar 17 17:49:39.953067 kubelet[1775]: E0317 17:49:39.895836 1775 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{10.0.0.117.182da8645c675da3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.117,UID:10.0.0.117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.117,},FirstTimestamp:2025-03-17 17:49:39.862724003 +0000 UTC m=+1.001991891,LastTimestamp:2025-03-17 17:49:39.862724003 +0000 UTC m=+1.001991891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.117,}" Mar 17 17:49:39.954453 kubelet[1775]: I0317 17:49:39.954418 1775 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:49:39.954514 kubelet[1775]: I0317 17:49:39.954497 1775 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:49:39.954750 kubelet[1775]: E0317 17:49:39.954598 1775 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.117.182da8645ccffa63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.117,UID:10.0.0.117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.117,},FirstTimestamp:2025-03-17 17:49:39.869579875 +0000 UTC m=+1.008847763,LastTimestamp:2025-03-17 17:49:39.869579875 +0000 UTC m=+1.008847763,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.117,}" Mar 17 17:49:39.967052 kubelet[1775]: E0317 17:49:39.967013 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:39.967849 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:49:39.981233 kubelet[1775]: I0317 17:49:39.981079 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:49:39.982206 kubelet[1775]: I0317 17:49:39.982186 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:49:39.982252 kubelet[1775]: I0317 17:49:39.982217 1775 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:49:39.982252 kubelet[1775]: I0317 17:49:39.982236 1775 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:49:39.982320 kubelet[1775]: E0317 17:49:39.982282 1775 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:49:39.985334 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:49:39.988606 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:49:39.996333 kubelet[1775]: I0317 17:49:39.995855 1775 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:49:39.996333 kubelet[1775]: I0317 17:49:39.996084 1775 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:49:39.996333 kubelet[1775]: I0317 17:49:39.996095 1775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:49:39.996333 kubelet[1775]: I0317 17:49:39.996278 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:49:39.998005 kubelet[1775]: E0317 17:49:39.997969 1775 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.117\" not found" Mar 17 17:49:40.091549 kubelet[1775]: E0317 17:49:40.091410 1775 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.117\" not found" node="10.0.0.117" Mar 17 17:49:40.097119 kubelet[1775]: I0317 17:49:40.097089 1775 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.117" Mar 17 17:49:40.103412 kubelet[1775]: I0317 17:49:40.103370 1775 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.117" Mar 17 17:49:40.103412 kubelet[1775]: E0317 17:49:40.103403 1775 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.117\": node \"10.0.0.117\" not found" Mar 17 17:49:40.111490 kubelet[1775]: E0317 17:49:40.111382 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.211611 kubelet[1775]: E0317 17:49:40.211528 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.312568 kubelet[1775]: E0317 17:49:40.312530 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.384485 sudo[1645]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:40.387180 sshd[1644]: Connection closed by 10.0.0.1 port 48192 Mar 17 17:49:40.387016 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:40.390013 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:48192.service: Deactivated successfully. Mar 17 17:49:40.392468 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:49:40.392666 systemd[1]: session-7.scope: Consumed 506ms CPU time, 74.6M memory peak. Mar 17 17:49:40.393716 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:49:40.394526 systemd-logind[1445]: Removed session 7. 
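Up to this point every sync logs node "10.0.0.117" not found: the kubelet registered the node at 17:49:40.103, but its informers are still connecting as system:anonymous (the forbidden list/watch errors above), so the local node lister stays empty until the client certificate rotation logged just below. A quick way to confirm the Node object exists server-side is a direct GET with client-go; the kubeconfig path here is an assumption, not taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the kubelet's own credentials may live elsewhere
	// (e.g. the bootstrap kubeconfig used for client rotation).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "10.0.0.117", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node object exists:", node.Name)
}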
Mar 17 17:49:40.413542 kubelet[1775]: E0317 17:49:40.413399 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.513571 kubelet[1775]: E0317 17:49:40.513510 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.614464 kubelet[1775]: E0317 17:49:40.614398 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.715194 kubelet[1775]: E0317 17:49:40.715069 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.788883 kubelet[1775]: I0317 17:49:40.788822 1775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:49:40.789063 kubelet[1775]: W0317 17:49:40.788981 1775 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:49:40.815800 kubelet[1775]: E0317 17:49:40.815740 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:40.852556 kubelet[1775]: E0317 17:49:40.852503 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:40.916675 kubelet[1775]: E0317 17:49:40.916614 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:41.017294 kubelet[1775]: E0317 17:49:41.017164 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:41.117974 kubelet[1775]: E0317 17:49:41.117910 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:41.218584 kubelet[1775]: E0317 17:49:41.218541 1775 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.117\" not found" Mar 17 17:49:41.319591 kubelet[1775]: I0317 17:49:41.319492 1775 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:49:41.319855 containerd[1466]: time="2025-03-17T17:49:41.319820481Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
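The "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" pair above is the kubelet handing the node's PodCIDR (192.168.1.0/24) to containerd over CRI; containerd notes that no CNI config template is set and keeps waiting for a real config to appear in /etc/cni/net.d. A sketch of the message involved, assuming the k8s.io/cri-api v1 types:

package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Roughly the request the kubelet sends once it learns its PodCIDR; the runtime
	// can use it to template a CNI config, but only if a conf template is configured.
	req := &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.1.0/24"},
		},
	}
	fmt.Println("PodCIDR passed to the runtime:", req.RuntimeConfig.NetworkConfig.PodCidr)
}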
Mar 17 17:49:41.320411 kubelet[1775]: I0317 17:49:41.319992 1775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:49:41.852620 kubelet[1775]: E0317 17:49:41.852584 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:41.852620 kubelet[1775]: I0317 17:49:41.852605 1775 apiserver.go:52] "Watching apiserver" Mar 17 17:49:41.856480 kubelet[1775]: E0317 17:49:41.856435 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:41.863040 systemd[1]: Created slice kubepods-besteffort-pode513b76b_9d3a_4769_97aa_55d64a83435d.slice - libcontainer container kubepods-besteffort-pode513b76b_9d3a_4769_97aa_55d64a83435d.slice. Mar 17 17:49:41.867978 kubelet[1775]: I0317 17:49:41.867927 1775 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:49:41.872860 systemd[1]: Created slice kubepods-besteffort-poda97f4c47_4ea6_477a_b35e_5bc284aa2e88.slice - libcontainer container kubepods-besteffort-poda97f4c47_4ea6_477a_b35e_5bc284aa2e88.slice. Mar 17 17:49:41.879628 kubelet[1775]: I0317 17:49:41.879504 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-var-run-calico\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.879628 kubelet[1775]: I0317 17:49:41.879544 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-cni-bin-dir\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.879628 kubelet[1775]: I0317 17:49:41.879561 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-cni-net-dir\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.879628 kubelet[1775]: I0317 17:49:41.879584 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e513b76b-9d3a-4769-97aa-55d64a83435d-kube-proxy\") pod \"kube-proxy-v75l5\" (UID: \"e513b76b-9d3a-4769-97aa-55d64a83435d\") " pod="kube-system/kube-proxy-v75l5" Mar 17 17:49:41.879628 kubelet[1775]: I0317 17:49:41.879601 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e513b76b-9d3a-4769-97aa-55d64a83435d-xtables-lock\") pod \"kube-proxy-v75l5\" (UID: \"e513b76b-9d3a-4769-97aa-55d64a83435d\") " pod="kube-system/kube-proxy-v75l5" Mar 17 17:49:41.879923 kubelet[1775]: I0317 17:49:41.879849 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e513b76b-9d3a-4769-97aa-55d64a83435d-lib-modules\") pod \"kube-proxy-v75l5\" (UID: \"e513b76b-9d3a-4769-97aa-55d64a83435d\") " pod="kube-system/kube-proxy-v75l5" Mar 17 17:49:41.879923 kubelet[1775]: I0317 17:49:41.879884 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-policysync\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880246 kubelet[1775]: I0317 17:49:41.879928 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-var-lib-calico\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880246 kubelet[1775]: I0317 17:49:41.879958 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-cni-log-dir\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880246 kubelet[1775]: I0317 17:49:41.879976 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e545164e-181a-4d15-92e4-f279fff55ecc-varrun\") pod \"csi-node-driver-zrt5z\" (UID: \"e545164e-181a-4d15-92e4-f279fff55ecc\") " pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:41.880246 kubelet[1775]: I0317 17:49:41.879991 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e545164e-181a-4d15-92e4-f279fff55ecc-kubelet-dir\") pod \"csi-node-driver-zrt5z\" (UID: \"e545164e-181a-4d15-92e4-f279fff55ecc\") " pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:41.880246 kubelet[1775]: I0317 17:49:41.880017 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e545164e-181a-4d15-92e4-f279fff55ecc-socket-dir\") pod \"csi-node-driver-zrt5z\" (UID: \"e545164e-181a-4d15-92e4-f279fff55ecc\") " pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:41.880368 kubelet[1775]: I0317 17:49:41.880050 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-xtables-lock\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880368 kubelet[1775]: I0317 17:49:41.880066 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-flexvol-driver-host\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880368 kubelet[1775]: I0317 17:49:41.880081 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmr6\" (UniqueName: 
\"kubernetes.io/projected/e513b76b-9d3a-4769-97aa-55d64a83435d-kube-api-access-pdmr6\") pod \"kube-proxy-v75l5\" (UID: \"e513b76b-9d3a-4769-97aa-55d64a83435d\") " pod="kube-system/kube-proxy-v75l5" Mar 17 17:49:41.880368 kubelet[1775]: I0317 17:49:41.880095 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-lib-modules\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880368 kubelet[1775]: I0317 17:49:41.880111 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-tigera-ca-bundle\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880462 kubelet[1775]: I0317 17:49:41.880129 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-node-certs\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880462 kubelet[1775]: I0317 17:49:41.880161 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ghf\" (UniqueName: \"kubernetes.io/projected/a97f4c47-4ea6-477a-b35e-5bc284aa2e88-kube-api-access-t7ghf\") pod \"calico-node-8phd5\" (UID: \"a97f4c47-4ea6-477a-b35e-5bc284aa2e88\") " pod="calico-system/calico-node-8phd5" Mar 17 17:49:41.880462 kubelet[1775]: I0317 17:49:41.880180 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e545164e-181a-4d15-92e4-f279fff55ecc-registration-dir\") pod \"csi-node-driver-zrt5z\" (UID: \"e545164e-181a-4d15-92e4-f279fff55ecc\") " pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:41.880462 kubelet[1775]: I0317 17:49:41.880196 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbw7\" (UniqueName: \"kubernetes.io/projected/e545164e-181a-4d15-92e4-f279fff55ecc-kube-api-access-8tbw7\") pod \"csi-node-driver-zrt5z\" (UID: \"e545164e-181a-4d15-92e4-f279fff55ecc\") " pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:41.982018 kubelet[1775]: E0317 17:49:41.981893 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.982018 kubelet[1775]: W0317 17:49:41.981912 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.982018 kubelet[1775]: E0317 17:49:41.981950 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.982399 kubelet[1775]: E0317 17:49:41.982144 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.982399 kubelet[1775]: W0317 17:49:41.982152 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.982399 kubelet[1775]: E0317 17:49:41.982168 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.982399 kubelet[1775]: E0317 17:49:41.982367 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.982399 kubelet[1775]: W0317 17:49:41.982375 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.982399 kubelet[1775]: E0317 17:49:41.982392 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.983278 kubelet[1775]: E0317 17:49:41.983147 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.983278 kubelet[1775]: W0317 17:49:41.983236 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.983278 kubelet[1775]: E0317 17:49:41.983264 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.983458 kubelet[1775]: E0317 17:49:41.983443 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.983458 kubelet[1775]: W0317 17:49:41.983458 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.983533 kubelet[1775]: E0317 17:49:41.983474 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.983666 kubelet[1775]: E0317 17:49:41.983623 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.983666 kubelet[1775]: W0317 17:49:41.983631 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.983745 kubelet[1775]: E0317 17:49:41.983669 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.983794 kubelet[1775]: E0317 17:49:41.983786 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.983824 kubelet[1775]: W0317 17:49:41.983795 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.983868 kubelet[1775]: E0317 17:49:41.983839 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.983994 kubelet[1775]: E0317 17:49:41.983984 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.983994 kubelet[1775]: W0317 17:49:41.983993 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.984149 kubelet[1775]: E0317 17:49:41.984083 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.984149 kubelet[1775]: E0317 17:49:41.984140 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.984149 kubelet[1775]: W0317 17:49:41.984148 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.984384 kubelet[1775]: E0317 17:49:41.984288 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.984384 kubelet[1775]: E0317 17:49:41.984357 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.984384 kubelet[1775]: W0317 17:49:41.984366 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.984471 kubelet[1775]: E0317 17:49:41.984401 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.984543 kubelet[1775]: E0317 17:49:41.984528 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.984543 kubelet[1775]: W0317 17:49:41.984539 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.984629 kubelet[1775]: E0317 17:49:41.984560 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.984874 kubelet[1775]: E0317 17:49:41.984856 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.984874 kubelet[1775]: W0317 17:49:41.984868 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.984874 kubelet[1775]: E0317 17:49:41.984894 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.985066 kubelet[1775]: E0317 17:49:41.985021 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.985066 kubelet[1775]: W0317 17:49:41.985047 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.985156 kubelet[1775]: E0317 17:49:41.985105 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.985326 kubelet[1775]: E0317 17:49:41.985312 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.985326 kubelet[1775]: W0317 17:49:41.985322 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.985566 kubelet[1775]: E0317 17:49:41.985405 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.985664 kubelet[1775]: E0317 17:49:41.985649 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.985664 kubelet[1775]: W0317 17:49:41.985661 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.985790 kubelet[1775]: E0317 17:49:41.985698 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.985895 kubelet[1775]: E0317 17:49:41.985882 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.985895 kubelet[1775]: W0317 17:49:41.985893 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.985950 kubelet[1775]: E0317 17:49:41.985937 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.986084 kubelet[1775]: E0317 17:49:41.986073 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.986084 kubelet[1775]: W0317 17:49:41.986083 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.986139 kubelet[1775]: E0317 17:49:41.986121 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.986258 kubelet[1775]: E0317 17:49:41.986248 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.986258 kubelet[1775]: W0317 17:49:41.986256 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.986331 kubelet[1775]: E0317 17:49:41.986319 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.986437 kubelet[1775]: E0317 17:49:41.986428 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.986463 kubelet[1775]: W0317 17:49:41.986437 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.986494 kubelet[1775]: E0317 17:49:41.986482 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.986605 kubelet[1775]: E0317 17:49:41.986595 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.986628 kubelet[1775]: W0317 17:49:41.986605 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.986653 kubelet[1775]: E0317 17:49:41.986640 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.986762 kubelet[1775]: E0317 17:49:41.986746 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.986792 kubelet[1775]: W0317 17:49:41.986762 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.986814 kubelet[1775]: E0317 17:49:41.986804 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.986988 kubelet[1775]: E0317 17:49:41.986976 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987015 kubelet[1775]: W0317 17:49:41.986988 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987015 kubelet[1775]: E0317 17:49:41.987007 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.987240 kubelet[1775]: E0317 17:49:41.987228 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987240 kubelet[1775]: W0317 17:49:41.987238 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987284 kubelet[1775]: E0317 17:49:41.987269 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.987388 kubelet[1775]: E0317 17:49:41.987378 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987411 kubelet[1775]: W0317 17:49:41.987387 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987411 kubelet[1775]: E0317 17:49:41.987405 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.987513 kubelet[1775]: E0317 17:49:41.987504 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987535 kubelet[1775]: W0317 17:49:41.987516 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987555 kubelet[1775]: E0317 17:49:41.987535 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.987641 kubelet[1775]: E0317 17:49:41.987632 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987666 kubelet[1775]: W0317 17:49:41.987641 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987666 kubelet[1775]: E0317 17:49:41.987657 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.987776 kubelet[1775]: E0317 17:49:41.987758 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987806 kubelet[1775]: W0317 17:49:41.987775 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987806 kubelet[1775]: E0317 17:49:41.987796 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.987915 kubelet[1775]: E0317 17:49:41.987905 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.987941 kubelet[1775]: W0317 17:49:41.987914 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.987941 kubelet[1775]: E0317 17:49:41.987932 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.988060 kubelet[1775]: E0317 17:49:41.988045 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988060 kubelet[1775]: W0317 17:49:41.988055 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988108 kubelet[1775]: E0317 17:49:41.988071 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.988190 kubelet[1775]: E0317 17:49:41.988179 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988190 kubelet[1775]: W0317 17:49:41.988188 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988239 kubelet[1775]: E0317 17:49:41.988205 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.988317 kubelet[1775]: E0317 17:49:41.988308 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988338 kubelet[1775]: W0317 17:49:41.988316 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988359 kubelet[1775]: E0317 17:49:41.988332 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.988446 kubelet[1775]: E0317 17:49:41.988436 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988471 kubelet[1775]: W0317 17:49:41.988445 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988471 kubelet[1775]: E0317 17:49:41.988463 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.988571 kubelet[1775]: E0317 17:49:41.988562 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988591 kubelet[1775]: W0317 17:49:41.988570 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988610 kubelet[1775]: E0317 17:49:41.988590 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.988711 kubelet[1775]: E0317 17:49:41.988701 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988734 kubelet[1775]: W0317 17:49:41.988711 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988734 kubelet[1775]: E0317 17:49:41.988727 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.988847 kubelet[1775]: E0317 17:49:41.988837 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.988847 kubelet[1775]: W0317 17:49:41.988847 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.988897 kubelet[1775]: E0317 17:49:41.988864 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.989030 kubelet[1775]: E0317 17:49:41.989017 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.989071 kubelet[1775]: W0317 17:49:41.989062 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.989097 kubelet[1775]: E0317 17:49:41.989083 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.989214 kubelet[1775]: E0317 17:49:41.989203 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.989236 kubelet[1775]: W0317 17:49:41.989213 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.989236 kubelet[1775]: E0317 17:49:41.989231 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.989343 kubelet[1775]: E0317 17:49:41.989333 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.989369 kubelet[1775]: W0317 17:49:41.989343 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.989369 kubelet[1775]: E0317 17:49:41.989360 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.989503 kubelet[1775]: E0317 17:49:41.989462 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.989503 kubelet[1775]: W0317 17:49:41.989469 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.989503 kubelet[1775]: E0317 17:49:41.989486 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.990186 kubelet[1775]: E0317 17:49:41.990092 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.990186 kubelet[1775]: W0317 17:49:41.990105 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.990186 kubelet[1775]: E0317 17:49:41.990157 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.990344 kubelet[1775]: E0317 17:49:41.990335 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.990389 kubelet[1775]: W0317 17:49:41.990379 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.990454 kubelet[1775]: E0317 17:49:41.990440 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.990614 kubelet[1775]: E0317 17:49:41.990604 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.990681 kubelet[1775]: W0317 17:49:41.990664 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.990761 kubelet[1775]: E0317 17:49:41.990740 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.990994 kubelet[1775]: E0317 17:49:41.990915 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.990994 kubelet[1775]: W0317 17:49:41.990930 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.990994 kubelet[1775]: E0317 17:49:41.990957 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.991145 kubelet[1775]: E0317 17:49:41.991134 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.991201 kubelet[1775]: W0317 17:49:41.991190 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.991305 kubelet[1775]: E0317 17:49:41.991287 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.991524 kubelet[1775]: E0317 17:49:41.991451 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.991524 kubelet[1775]: W0317 17:49:41.991461 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.991524 kubelet[1775]: E0317 17:49:41.991484 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.991653 kubelet[1775]: E0317 17:49:41.991643 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.991724 kubelet[1775]: W0317 17:49:41.991714 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.991953 kubelet[1775]: E0317 17:49:41.991931 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.991996 kubelet[1775]: E0317 17:49:41.991939 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.991996 kubelet[1775]: W0317 17:49:41.991976 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.992083 kubelet[1775]: E0317 17:49:41.992052 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.993138 kubelet[1775]: E0317 17:49:41.993115 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.993138 kubelet[1775]: W0317 17:49:41.993133 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.993274 kubelet[1775]: E0317 17:49:41.993243 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.993380 kubelet[1775]: E0317 17:49:41.993352 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.993380 kubelet[1775]: W0317 17:49:41.993365 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.993433 kubelet[1775]: E0317 17:49:41.993425 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.993616 kubelet[1775]: E0317 17:49:41.993585 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.993616 kubelet[1775]: W0317 17:49:41.993600 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.993725 kubelet[1775]: E0317 17:49:41.993697 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.993809 kubelet[1775]: E0317 17:49:41.993788 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.993809 kubelet[1775]: W0317 17:49:41.993807 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.993901 kubelet[1775]: E0317 17:49:41.993835 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.993991 kubelet[1775]: E0317 17:49:41.993978 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.993991 kubelet[1775]: W0317 17:49:41.993989 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.994166 kubelet[1775]: E0317 17:49:41.994015 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.994205 kubelet[1775]: E0317 17:49:41.994188 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.994205 kubelet[1775]: W0317 17:49:41.994200 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.994271 kubelet[1775]: E0317 17:49:41.994256 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.994993 kubelet[1775]: E0317 17:49:41.994974 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.994993 kubelet[1775]: W0317 17:49:41.994988 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.995457 kubelet[1775]: E0317 17:49:41.995080 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.995457 kubelet[1775]: E0317 17:49:41.995224 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.995457 kubelet[1775]: W0317 17:49:41.995234 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.995457 kubelet[1775]: E0317 17:49:41.995294 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.995457 kubelet[1775]: E0317 17:49:41.995399 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.995457 kubelet[1775]: W0317 17:49:41.995407 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.995636 kubelet[1775]: E0317 17:49:41.995471 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:41.995689 kubelet[1775]: E0317 17:49:41.995672 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.995689 kubelet[1775]: W0317 17:49:41.995681 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.995743 kubelet[1775]: E0317 17:49:41.995721 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.995931 kubelet[1775]: E0317 17:49:41.995912 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.995931 kubelet[1775]: W0317 17:49:41.995929 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.996058 kubelet[1775]: E0317 17:49:41.996014 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.997067 kubelet[1775]: E0317 17:49:41.996117 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.997067 kubelet[1775]: W0317 17:49:41.996130 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.997067 kubelet[1775]: E0317 17:49:41.996208 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.997067 kubelet[1775]: E0317 17:49:41.996286 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.997067 kubelet[1775]: W0317 17:49:41.996293 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.997067 kubelet[1775]: E0317 17:49:41.996301 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:41.998461 kubelet[1775]: E0317 17:49:41.998440 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:41.998599 kubelet[1775]: W0317 17:49:41.998535 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:41.998599 kubelet[1775]: E0317 17:49:41.998557 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:42.007842 kubelet[1775]: E0317 17:49:42.007819 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:42.008062 kubelet[1775]: W0317 17:49:42.007908 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:42.008062 kubelet[1775]: E0317 17:49:42.007929 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:42.008419 kubelet[1775]: E0317 17:49:42.008402 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:42.008419 kubelet[1775]: W0317 17:49:42.008417 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:42.008496 kubelet[1775]: E0317 17:49:42.008430 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:42.171798 containerd[1466]: time="2025-03-17T17:49:42.171760306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v75l5,Uid:e513b76b-9d3a-4769-97aa-55d64a83435d,Namespace:kube-system,Attempt:0,}" Mar 17 17:49:42.175683 containerd[1466]: time="2025-03-17T17:49:42.175549921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8phd5,Uid:a97f4c47-4ea6-477a-b35e-5bc284aa2e88,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:42.749034 containerd[1466]: time="2025-03-17T17:49:42.748902924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:42.750280 containerd[1466]: time="2025-03-17T17:49:42.750250906Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:42.750853 containerd[1466]: time="2025-03-17T17:49:42.750736059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:49:42.752306 containerd[1466]: time="2025-03-17T17:49:42.752249896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:49:42.754845 containerd[1466]: time="2025-03-17T17:49:42.754812146Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:42.757788 containerd[1466]: time="2025-03-17T17:49:42.757755687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:42.758795 containerd[1466]: time="2025-03-17T17:49:42.758739131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.899353ms" Mar 17 17:49:42.761398 containerd[1466]: time="2025-03-17T17:49:42.761370005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.73816ms" Mar 17 17:49:42.853153 kubelet[1775]: E0317 17:49:42.853100 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:42.858100 containerd[1466]: time="2025-03-17T17:49:42.857732398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:42.858100 containerd[1466]: time="2025-03-17T17:49:42.857806527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:42.858100 containerd[1466]: time="2025-03-17T17:49:42.857821754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:42.858100 containerd[1466]: time="2025-03-17T17:49:42.857895401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:42.859199 containerd[1466]: time="2025-03-17T17:49:42.859070174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:42.859268 containerd[1466]: time="2025-03-17T17:49:42.859171865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:42.859268 containerd[1466]: time="2025-03-17T17:49:42.859209673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:42.859356 containerd[1466]: time="2025-03-17T17:49:42.859304012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:42.940200 systemd[1]: Started cri-containerd-672a6ced9fc3c328554e1c4fa6ec0c693f9aa552c5b6285220dac2fd1d25a830.scope - libcontainer container 672a6ced9fc3c328554e1c4fa6ec0c693f9aa552c5b6285220dac2fd1d25a830. Mar 17 17:49:42.941306 systemd[1]: Started cri-containerd-b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8.scope - libcontainer container b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8. 
Mar 17 17:49:42.959358 containerd[1466]: time="2025-03-17T17:49:42.959289039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v75l5,Uid:e513b76b-9d3a-4769-97aa-55d64a83435d,Namespace:kube-system,Attempt:0,} returns sandbox id \"672a6ced9fc3c328554e1c4fa6ec0c693f9aa552c5b6285220dac2fd1d25a830\"" Mar 17 17:49:42.961956 containerd[1466]: time="2025-03-17T17:49:42.961908021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:49:42.963967 containerd[1466]: time="2025-03-17T17:49:42.963914645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8phd5,Uid:a97f4c47-4ea6-477a-b35e-5bc284aa2e88,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\"" Mar 17 17:49:42.995912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265764944.mount: Deactivated successfully. Mar 17 17:49:43.853570 kubelet[1775]: E0317 17:49:43.853443 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:43.869143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1317810528.mount: Deactivated successfully. Mar 17 17:49:43.983468 kubelet[1775]: E0317 17:49:43.983428 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:44.090300 containerd[1466]: time="2025-03-17T17:49:44.090254165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:44.090724 containerd[1466]: time="2025-03-17T17:49:44.090697473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 17 17:49:44.091448 containerd[1466]: time="2025-03-17T17:49:44.091423984Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:44.093217 containerd[1466]: time="2025-03-17T17:49:44.093185776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:44.094002 containerd[1466]: time="2025-03-17T17:49:44.093972411Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.132030327s" Mar 17 17:49:44.094002 containerd[1466]: time="2025-03-17T17:49:44.094002955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 17:49:44.095162 containerd[1466]: time="2025-03-17T17:49:44.095122522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:49:44.096350 containerd[1466]: time="2025-03-17T17:49:44.096318550Z" level=info msg="CreateContainer within sandbox 
\"672a6ced9fc3c328554e1c4fa6ec0c693f9aa552c5b6285220dac2fd1d25a830\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:49:44.110739 containerd[1466]: time="2025-03-17T17:49:44.110622837Z" level=info msg="CreateContainer within sandbox \"672a6ced9fc3c328554e1c4fa6ec0c693f9aa552c5b6285220dac2fd1d25a830\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2270643fefbe41a26a59eb4f6dea69f356ed97893e2977a8c8dfbdcc1645bb88\"" Mar 17 17:49:44.111628 containerd[1466]: time="2025-03-17T17:49:44.111597231Z" level=info msg="StartContainer for \"2270643fefbe41a26a59eb4f6dea69f356ed97893e2977a8c8dfbdcc1645bb88\"" Mar 17 17:49:44.144248 systemd[1]: Started cri-containerd-2270643fefbe41a26a59eb4f6dea69f356ed97893e2977a8c8dfbdcc1645bb88.scope - libcontainer container 2270643fefbe41a26a59eb4f6dea69f356ed97893e2977a8c8dfbdcc1645bb88. Mar 17 17:49:44.169382 containerd[1466]: time="2025-03-17T17:49:44.169340809Z" level=info msg="StartContainer for \"2270643fefbe41a26a59eb4f6dea69f356ed97893e2977a8c8dfbdcc1645bb88\" returns successfully" Mar 17 17:49:44.854426 kubelet[1775]: E0317 17:49:44.854379 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:45.004526 kubelet[1775]: I0317 17:49:45.004436 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v75l5" podStartSLOduration=3.8707204539999998 podStartE2EDuration="5.004418566s" podCreationTimestamp="2025-03-17 17:49:40 +0000 UTC" firstStartedPulling="2025-03-17 17:49:42.961285458 +0000 UTC m=+4.100553345" lastFinishedPulling="2025-03-17 17:49:44.09498357 +0000 UTC m=+5.234251457" observedRunningTime="2025-03-17 17:49:45.003832662 +0000 UTC m=+6.143100549" watchObservedRunningTime="2025-03-17 17:49:45.004418566 +0000 UTC m=+6.143686413" Mar 17 17:49:45.092907 kubelet[1775]: E0317 17:49:45.092880 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.092907 kubelet[1775]: W0317 17:49:45.092901 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093079 kubelet[1775]: E0317 17:49:45.092917 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.093120 kubelet[1775]: E0317 17:49:45.093109 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.093120 kubelet[1775]: W0317 17:49:45.093118 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093185 kubelet[1775]: E0317 17:49:45.093127 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.093307 kubelet[1775]: E0317 17:49:45.093298 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.093307 kubelet[1775]: W0317 17:49:45.093307 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093365 kubelet[1775]: E0317 17:49:45.093315 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.093470 kubelet[1775]: E0317 17:49:45.093460 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.093507 kubelet[1775]: W0317 17:49:45.093473 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093507 kubelet[1775]: E0317 17:49:45.093482 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.093639 kubelet[1775]: E0317 17:49:45.093629 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.093639 kubelet[1775]: W0317 17:49:45.093639 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093692 kubelet[1775]: E0317 17:49:45.093647 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.093778 kubelet[1775]: E0317 17:49:45.093768 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.093808 kubelet[1775]: W0317 17:49:45.093781 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093808 kubelet[1775]: E0317 17:49:45.093789 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.093931 kubelet[1775]: E0317 17:49:45.093917 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.093931 kubelet[1775]: W0317 17:49:45.093931 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.093984 kubelet[1775]: E0317 17:49:45.093939 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.094089 kubelet[1775]: E0317 17:49:45.094080 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094089 kubelet[1775]: W0317 17:49:45.094089 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.094135 kubelet[1775]: E0317 17:49:45.094098 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.094252 kubelet[1775]: E0317 17:49:45.094237 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094252 kubelet[1775]: W0317 17:49:45.094251 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.094311 kubelet[1775]: E0317 17:49:45.094259 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.094387 kubelet[1775]: E0317 17:49:45.094378 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094416 kubelet[1775]: W0317 17:49:45.094390 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.094416 kubelet[1775]: E0317 17:49:45.094397 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.094522 kubelet[1775]: E0317 17:49:45.094513 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094558 kubelet[1775]: W0317 17:49:45.094524 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.094558 kubelet[1775]: E0317 17:49:45.094532 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.094669 kubelet[1775]: E0317 17:49:45.094657 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094669 kubelet[1775]: W0317 17:49:45.094669 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.094729 kubelet[1775]: E0317 17:49:45.094677 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.094821 kubelet[1775]: E0317 17:49:45.094808 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094821 kubelet[1775]: W0317 17:49:45.094821 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.094930 kubelet[1775]: E0317 17:49:45.094828 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.094983 kubelet[1775]: E0317 17:49:45.094967 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.094983 kubelet[1775]: W0317 17:49:45.094981 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095039 kubelet[1775]: E0317 17:49:45.094988 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.095133 kubelet[1775]: E0317 17:49:45.095124 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.095133 kubelet[1775]: W0317 17:49:45.095133 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095197 kubelet[1775]: E0317 17:49:45.095141 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.095277 kubelet[1775]: E0317 17:49:45.095269 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.095313 kubelet[1775]: W0317 17:49:45.095279 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095313 kubelet[1775]: E0317 17:49:45.095287 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.095440 kubelet[1775]: E0317 17:49:45.095430 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.095466 kubelet[1775]: W0317 17:49:45.095440 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095466 kubelet[1775]: E0317 17:49:45.095448 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.095582 kubelet[1775]: E0317 17:49:45.095570 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.095582 kubelet[1775]: W0317 17:49:45.095581 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095635 kubelet[1775]: E0317 17:49:45.095588 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.095728 kubelet[1775]: E0317 17:49:45.095720 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.095728 kubelet[1775]: W0317 17:49:45.095728 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095803 kubelet[1775]: E0317 17:49:45.095735 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.095891 kubelet[1775]: E0317 17:49:45.095881 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.095891 kubelet[1775]: W0317 17:49:45.095891 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.095973 kubelet[1775]: E0317 17:49:45.095900 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.101415 kubelet[1775]: E0317 17:49:45.101300 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.101415 kubelet[1775]: W0317 17:49:45.101315 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.101415 kubelet[1775]: E0317 17:49:45.101327 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.101699 kubelet[1775]: E0317 17:49:45.101652 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.101699 kubelet[1775]: W0317 17:49:45.101663 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.101699 kubelet[1775]: E0317 17:49:45.101679 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.101932 kubelet[1775]: E0317 17:49:45.101916 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.101978 kubelet[1775]: W0317 17:49:45.101929 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.101978 kubelet[1775]: E0317 17:49:45.101957 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.102190 kubelet[1775]: E0317 17:49:45.102179 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.102190 kubelet[1775]: W0317 17:49:45.102189 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.102269 kubelet[1775]: E0317 17:49:45.102203 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.102430 kubelet[1775]: E0317 17:49:45.102418 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.102430 kubelet[1775]: W0317 17:49:45.102429 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.102493 kubelet[1775]: E0317 17:49:45.102441 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.102625 kubelet[1775]: E0317 17:49:45.102614 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.102625 kubelet[1775]: W0317 17:49:45.102624 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.102710 kubelet[1775]: E0317 17:49:45.102633 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.102972 kubelet[1775]: E0317 17:49:45.102958 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.102972 kubelet[1775]: W0317 17:49:45.102970 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.103118 kubelet[1775]: E0317 17:49:45.103039 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.103143 kubelet[1775]: E0317 17:49:45.103134 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.103143 kubelet[1775]: W0317 17:49:45.103141 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.103209 kubelet[1775]: E0317 17:49:45.103149 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.103299 kubelet[1775]: E0317 17:49:45.103288 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.103299 kubelet[1775]: W0317 17:49:45.103298 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.103358 kubelet[1775]: E0317 17:49:45.103305 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.103432 kubelet[1775]: E0317 17:49:45.103422 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.103432 kubelet[1775]: W0317 17:49:45.103431 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.103498 kubelet[1775]: E0317 17:49:45.103438 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.103590 kubelet[1775]: E0317 17:49:45.103578 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.103590 kubelet[1775]: W0317 17:49:45.103588 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.103640 kubelet[1775]: E0317 17:49:45.103595 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:45.103940 kubelet[1775]: E0317 17:49:45.103925 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:45.103940 kubelet[1775]: W0317 17:49:45.103937 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:45.104012 kubelet[1775]: E0317 17:49:45.103947 1775 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:45.212659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997702166.mount: Deactivated successfully. Mar 17 17:49:45.277665 containerd[1466]: time="2025-03-17T17:49:45.277615362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:45.278095 containerd[1466]: time="2025-03-17T17:49:45.278051420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6490047" Mar 17 17:49:45.278812 containerd[1466]: time="2025-03-17T17:49:45.278782958Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:45.280664 containerd[1466]: time="2025-03-17T17:49:45.280607309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:45.281409 containerd[1466]: time="2025-03-17T17:49:45.281366489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.186211943s" Mar 17 17:49:45.281409 containerd[1466]: time="2025-03-17T17:49:45.281410380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 17 17:49:45.283342 containerd[1466]: time="2025-03-17T17:49:45.283310878Z" level=info msg="CreateContainer within sandbox \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:49:45.294998 containerd[1466]: time="2025-03-17T17:49:45.294908725Z" level=info msg="CreateContainer within sandbox \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1\"" Mar 17 17:49:45.295611 containerd[1466]: time="2025-03-17T17:49:45.295551398Z" level=info msg="StartContainer for \"7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1\"" Mar 17 17:49:45.326247 systemd[1]: Started cri-containerd-7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1.scope - libcontainer container 7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1. Mar 17 17:49:45.354628 containerd[1466]: time="2025-03-17T17:49:45.354579005Z" level=info msg="StartContainer for \"7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1\" returns successfully" Mar 17 17:49:45.371757 systemd[1]: cri-containerd-7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1.scope: Deactivated successfully. 
Mar 17 17:49:45.527156 containerd[1466]: time="2025-03-17T17:49:45.526997822Z" level=info msg="shim disconnected" id=7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1 namespace=k8s.io Mar 17 17:49:45.527156 containerd[1466]: time="2025-03-17T17:49:45.527068833Z" level=warning msg="cleaning up after shim disconnected" id=7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1 namespace=k8s.io Mar 17 17:49:45.527156 containerd[1466]: time="2025-03-17T17:49:45.527077619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:45.855424 kubelet[1775]: E0317 17:49:45.855294 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:45.982749 kubelet[1775]: E0317 17:49:45.982596 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:45.999806 containerd[1466]: time="2025-03-17T17:49:45.999595761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:49:46.194183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b9d84620327d167082870d84300524c45125ed1eb3e6c000e7c6ac1f0a725c1-rootfs.mount: Deactivated successfully. Mar 17 17:49:46.855655 kubelet[1775]: E0317 17:49:46.855612 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:47.856181 kubelet[1775]: E0317 17:49:47.856149 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:47.984440 kubelet[1775]: E0317 17:49:47.983369 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:48.071111 containerd[1466]: time="2025-03-17T17:49:48.071071858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:48.071741 containerd[1466]: time="2025-03-17T17:49:48.071615262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 17 17:49:48.072539 containerd[1466]: time="2025-03-17T17:49:48.072506121Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:48.074887 containerd[1466]: time="2025-03-17T17:49:48.074841503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:48.075553 containerd[1466]: time="2025-03-17T17:49:48.075527392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size 
\"92597153\" in 2.075894287s" Mar 17 17:49:48.075625 containerd[1466]: time="2025-03-17T17:49:48.075553965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 17 17:49:48.077487 containerd[1466]: time="2025-03-17T17:49:48.077453077Z" level=info msg="CreateContainer within sandbox \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:49:48.091274 containerd[1466]: time="2025-03-17T17:49:48.091237876Z" level=info msg="CreateContainer within sandbox \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3\"" Mar 17 17:49:48.091822 containerd[1466]: time="2025-03-17T17:49:48.091797994Z" level=info msg="StartContainer for \"ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3\"" Mar 17 17:49:48.124204 systemd[1]: Started cri-containerd-ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3.scope - libcontainer container ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3. Mar 17 17:49:48.153032 containerd[1466]: time="2025-03-17T17:49:48.152892079Z" level=info msg="StartContainer for \"ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3\" returns successfully" Mar 17 17:49:48.635890 containerd[1466]: time="2025-03-17T17:49:48.635828985Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:48.637808 systemd[1]: cri-containerd-ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3.scope: Deactivated successfully. Mar 17 17:49:48.638095 systemd[1]: cri-containerd-ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3.scope: Consumed 429ms CPU time, 168.2M memory peak, 150.3M written to disk. Mar 17 17:49:48.654870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3-rootfs.mount: Deactivated successfully. 
Mar 17 17:49:48.675599 kubelet[1775]: I0317 17:49:48.675545 1775 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:49:48.857631 kubelet[1775]: E0317 17:49:48.857590 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:49.001547 containerd[1466]: time="2025-03-17T17:49:49.001389998Z" level=info msg="shim disconnected" id=ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3 namespace=k8s.io Mar 17 17:49:49.001547 containerd[1466]: time="2025-03-17T17:49:49.001442209Z" level=warning msg="cleaning up after shim disconnected" id=ccf05e098e57095429772416080a8f84993e4c9f77d34bef29f64ab215b4f9f3 namespace=k8s.io Mar 17 17:49:49.001547 containerd[1466]: time="2025-03-17T17:49:49.001459279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:49.857910 kubelet[1775]: E0317 17:49:49.857863 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:49.987465 systemd[1]: Created slice kubepods-besteffort-pode545164e_181a_4d15_92e4_f279fff55ecc.slice - libcontainer container kubepods-besteffort-pode545164e_181a_4d15_92e4_f279fff55ecc.slice. Mar 17 17:49:49.989360 containerd[1466]: time="2025-03-17T17:49:49.989317178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:50.018663 containerd[1466]: time="2025-03-17T17:49:50.018456491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:49:50.122804 containerd[1466]: time="2025-03-17T17:49:50.122674016Z" level=error msg="Failed to destroy network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:50.124276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439-shm.mount: Deactivated successfully. 
Mar 17 17:49:50.124681 containerd[1466]: time="2025-03-17T17:49:50.124382509Z" level=error msg="encountered an error cleaning up failed sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:50.124681 containerd[1466]: time="2025-03-17T17:49:50.124454058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:50.124734 kubelet[1775]: E0317 17:49:50.124675 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:50.124767 kubelet[1775]: E0317 17:49:50.124743 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:50.124795 kubelet[1775]: E0317 17:49:50.124764 1775 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:50.125270 kubelet[1775]: E0317 17:49:50.124803 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:50.858871 kubelet[1775]: E0317 17:49:50.858816 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:51.015793 kubelet[1775]: I0317 17:49:51.015763 1775 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439" Mar 17 17:49:51.016629 containerd[1466]: time="2025-03-17T17:49:51.016597048Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\"" Mar 17 17:49:51.016951 containerd[1466]: time="2025-03-17T17:49:51.016766074Z" level=info msg="Ensure that sandbox 04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439 in task-service has been cleanup successfully" Mar 17 17:49:51.016951 containerd[1466]: time="2025-03-17T17:49:51.016944273Z" level=info msg="TearDown network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" successfully" Mar 17 17:49:51.017001 containerd[1466]: time="2025-03-17T17:49:51.016960495Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" returns successfully" Mar 17 17:49:51.018640 containerd[1466]: time="2025-03-17T17:49:51.018256829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:1,}" Mar 17 17:49:51.018548 systemd[1]: run-netns-cni\x2d7e612860\x2d9a66\x2da476\x2db16e\x2de80f2d652b32.mount: Deactivated successfully. Mar 17 17:49:51.074698 containerd[1466]: time="2025-03-17T17:49:51.074657789Z" level=error msg="Failed to destroy network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:51.075621 containerd[1466]: time="2025-03-17T17:49:51.075592320Z" level=error msg="encountered an error cleaning up failed sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:51.075912 containerd[1466]: time="2025-03-17T17:49:51.075887715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:51.077875 kubelet[1775]: E0317 17:49:51.077741 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:51.077875 kubelet[1775]: E0317 17:49:51.077800 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:51.077875 kubelet[1775]: E0317 17:49:51.077818 1775 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:51.078015 kubelet[1775]: E0317 17:49:51.077871 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:51.081401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694-shm.mount: Deactivated successfully. Mar 17 17:49:51.724961 systemd[1]: Created slice kubepods-besteffort-pode6f5fd80_8c58_46d8_a963_ff380eae673d.slice - libcontainer container kubepods-besteffort-pode6f5fd80_8c58_46d8_a963_ff380eae673d.slice. Mar 17 17:49:51.848961 kubelet[1775]: I0317 17:49:51.848918 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5b9\" (UniqueName: \"kubernetes.io/projected/e6f5fd80-8c58-46d8-a963-ff380eae673d-kube-api-access-8k5b9\") pod \"nginx-deployment-8587fbcb89-jx8hc\" (UID: \"e6f5fd80-8c58-46d8-a963-ff380eae673d\") " pod="default/nginx-deployment-8587fbcb89-jx8hc" Mar 17 17:49:51.859340 kubelet[1775]: E0317 17:49:51.859113 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:52.021864 kubelet[1775]: I0317 17:49:52.021500 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694" Mar 17 17:49:52.022491 containerd[1466]: time="2025-03-17T17:49:52.022131722Z" level=info msg="StopPodSandbox for \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\"" Mar 17 17:49:52.022491 containerd[1466]: time="2025-03-17T17:49:52.022282018Z" level=info msg="Ensure that sandbox 905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694 in task-service has been cleanup successfully" Mar 17 17:49:52.022914 containerd[1466]: time="2025-03-17T17:49:52.022822171Z" level=info msg="TearDown network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\" successfully" Mar 17 17:49:52.022914 containerd[1466]: time="2025-03-17T17:49:52.022841513Z" level=info msg="StopPodSandbox for \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\" returns successfully" Mar 17 17:49:52.023766 containerd[1466]: time="2025-03-17T17:49:52.023431885Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\"" 
Mar 17 17:49:52.023766 containerd[1466]: time="2025-03-17T17:49:52.023502648Z" level=info msg="TearDown network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" successfully" Mar 17 17:49:52.023766 containerd[1466]: time="2025-03-17T17:49:52.023512139Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" returns successfully" Mar 17 17:49:52.023745 systemd[1]: run-netns-cni\x2d5fee5aaf\x2dcc21\x2d6864\x2d51b1\x2d5015f0b6386f.mount: Deactivated successfully. Mar 17 17:49:52.024866 containerd[1466]: time="2025-03-17T17:49:52.024275993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:2,}" Mar 17 17:49:52.027937 containerd[1466]: time="2025-03-17T17:49:52.027842090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jx8hc,Uid:e6f5fd80-8c58-46d8-a963-ff380eae673d,Namespace:default,Attempt:0,}" Mar 17 17:49:52.098045 containerd[1466]: time="2025-03-17T17:49:52.097959090Z" level=error msg="Failed to destroy network for sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.099043 containerd[1466]: time="2025-03-17T17:49:52.098344862Z" level=error msg="encountered an error cleaning up failed sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.099043 containerd[1466]: time="2025-03-17T17:49:52.098404852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.099146 kubelet[1775]: E0317 17:49:52.098608 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.099146 kubelet[1775]: E0317 17:49:52.098661 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:52.099146 kubelet[1775]: E0317 17:49:52.098678 1775 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:52.099248 kubelet[1775]: E0317 17:49:52.098717 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:52.105784 containerd[1466]: time="2025-03-17T17:49:52.105744849Z" level=error msg="Failed to destroy network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.106464 containerd[1466]: time="2025-03-17T17:49:52.106144477Z" level=error msg="encountered an error cleaning up failed sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.106464 containerd[1466]: time="2025-03-17T17:49:52.106212356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jx8hc,Uid:e6f5fd80-8c58-46d8-a963-ff380eae673d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.106577 kubelet[1775]: E0317 17:49:52.106446 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:52.106577 kubelet[1775]: E0317 17:49:52.106502 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-jx8hc" Mar 17 17:49:52.106577 kubelet[1775]: E0317 17:49:52.106523 1775 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-jx8hc" Mar 17 17:49:52.106657 kubelet[1775]: E0317 17:49:52.106557 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-jx8hc_default(e6f5fd80-8c58-46d8-a963-ff380eae673d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-jx8hc_default(e6f5fd80-8c58-46d8-a963-ff380eae673d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-jx8hc" podUID="e6f5fd80-8c58-46d8-a963-ff380eae673d" Mar 17 17:49:52.859464 kubelet[1775]: E0317 17:49:52.859389 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:53.018791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e-shm.mount: Deactivated successfully. Mar 17 17:49:53.018878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294-shm.mount: Deactivated successfully. Mar 17 17:49:53.018928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845551066.mount: Deactivated successfully. 
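Editor's note on the sandbox failures in this stretch: every RunPodSandbox error carries the same root cause, the Calico CNI plugin stats /var/lib/calico/nodename and the file does not exist yet because calico-node has not started and written it. A trivial sketch of that readiness condition (illustrative only, not the plugin's actual code):

// Illustrative sketch: the readiness condition behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		fmt.Println("calico/node has not written", nodenameFile,
			"- pod networking will keep failing until it does")
		return
	}
	if err != nil {
		fmt.Println("error reading", nodenameFile+":", err)
		return
	}
	fmt.Println("node name recorded by calico/node:", string(data))
}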
Mar 17 17:49:53.024290 kubelet[1775]: I0317 17:49:53.024255 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294" Mar 17 17:49:53.024907 containerd[1466]: time="2025-03-17T17:49:53.024746581Z" level=info msg="StopPodSandbox for \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\"" Mar 17 17:49:53.025395 containerd[1466]: time="2025-03-17T17:49:53.024914834Z" level=info msg="Ensure that sandbox 62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294 in task-service has been cleanup successfully" Mar 17 17:49:53.025395 containerd[1466]: time="2025-03-17T17:49:53.025103107Z" level=info msg="TearDown network for sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\" successfully" Mar 17 17:49:53.025395 containerd[1466]: time="2025-03-17T17:49:53.025120605Z" level=info msg="StopPodSandbox for \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\" returns successfully" Mar 17 17:49:53.025882 containerd[1466]: time="2025-03-17T17:49:53.025489222Z" level=info msg="StopPodSandbox for \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\"" Mar 17 17:49:53.025882 containerd[1466]: time="2025-03-17T17:49:53.025563378Z" level=info msg="TearDown network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\" successfully" Mar 17 17:49:53.025882 containerd[1466]: time="2025-03-17T17:49:53.025572908Z" level=info msg="StopPodSandbox for \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\" returns successfully" Mar 17 17:49:53.025987 containerd[1466]: time="2025-03-17T17:49:53.025938603Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\"" Mar 17 17:49:53.026050 containerd[1466]: time="2025-03-17T17:49:53.026033941Z" level=info msg="TearDown network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" successfully" Mar 17 17:49:53.026050 containerd[1466]: time="2025-03-17T17:49:53.026047955Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" returns successfully" Mar 17 17:49:53.026667 systemd[1]: run-netns-cni\x2dc4450ff5\x2d7db7\x2d3809\x2d1c09\x2d92f930b852a8.mount: Deactivated successfully. 
Mar 17 17:49:53.027194 kubelet[1775]: I0317 17:49:53.027163 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e" Mar 17 17:49:53.027675 containerd[1466]: time="2025-03-17T17:49:53.027642710Z" level=info msg="StopPodSandbox for \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\"" Mar 17 17:49:53.027780 containerd[1466]: time="2025-03-17T17:49:53.027755866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:3,}" Mar 17 17:49:53.027816 containerd[1466]: time="2025-03-17T17:49:53.027785896Z" level=info msg="Ensure that sandbox 844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e in task-service has been cleanup successfully" Mar 17 17:49:53.029094 containerd[1466]: time="2025-03-17T17:49:53.029061163Z" level=info msg="TearDown network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\" successfully" Mar 17 17:49:53.029094 containerd[1466]: time="2025-03-17T17:49:53.029088712Z" level=info msg="StopPodSandbox for \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\" returns successfully" Mar 17 17:49:53.029207 systemd[1]: run-netns-cni\x2d25626d41\x2df253\x2dbe72\x2d9c1d\x2d72b10d620a82.mount: Deactivated successfully. Mar 17 17:49:53.029991 containerd[1466]: time="2025-03-17T17:49:53.029957842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jx8hc,Uid:e6f5fd80-8c58-46d8-a963-ff380eae673d,Namespace:default,Attempt:1,}" Mar 17 17:49:53.217964 containerd[1466]: time="2025-03-17T17:49:53.217908320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:53.218519 containerd[1466]: time="2025-03-17T17:49:53.218481668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:49:53.219385 containerd[1466]: time="2025-03-17T17:49:53.219352000Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:53.224916 containerd[1466]: time="2025-03-17T17:49:53.224877543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:53.225999 containerd[1466]: time="2025-03-17T17:49:53.225301137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.206805151s" Mar 17 17:49:53.225999 containerd[1466]: time="2025-03-17T17:49:53.225328485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:49:53.232612 containerd[1466]: time="2025-03-17T17:49:53.232466361Z" level=info msg="CreateContainer within sandbox \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 
17:49:53.248322 containerd[1466]: time="2025-03-17T17:49:53.248204652Z" level=info msg="CreateContainer within sandbox \"b1e2869db02b4a4311d4caf1cc3dc481f98a909b3490669dceeb38c26af452d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"26417d097cc5df84e78029f2293bf155f98730b60adcccb20f0c9fc0baafe1d1\"" Mar 17 17:49:53.250944 containerd[1466]: time="2025-03-17T17:49:53.250175952Z" level=info msg="StartContainer for \"26417d097cc5df84e78029f2293bf155f98730b60adcccb20f0c9fc0baafe1d1\"" Mar 17 17:49:53.274177 systemd[1]: Started cri-containerd-26417d097cc5df84e78029f2293bf155f98730b60adcccb20f0c9fc0baafe1d1.scope - libcontainer container 26417d097cc5df84e78029f2293bf155f98730b60adcccb20f0c9fc0baafe1d1. Mar 17 17:49:53.283638 containerd[1466]: time="2025-03-17T17:49:53.283584995Z" level=error msg="Failed to destroy network for sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.283925 containerd[1466]: time="2025-03-17T17:49:53.283893190Z" level=error msg="encountered an error cleaning up failed sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.283991 containerd[1466]: time="2025-03-17T17:49:53.283966586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.284195 kubelet[1775]: E0317 17:49:53.284158 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.284257 kubelet[1775]: E0317 17:49:53.284214 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 17:49:53.284257 kubelet[1775]: E0317 17:49:53.284235 1775 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zrt5z" Mar 17 
17:49:53.284308 kubelet[1775]: E0317 17:49:53.284275 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zrt5z_calico-system(e545164e-181a-4d15-92e4-f279fff55ecc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zrt5z" podUID="e545164e-181a-4d15-92e4-f279fff55ecc" Mar 17 17:49:53.286050 containerd[1466]: time="2025-03-17T17:49:53.285982812Z" level=error msg="Failed to destroy network for sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.286313 containerd[1466]: time="2025-03-17T17:49:53.286278035Z" level=error msg="encountered an error cleaning up failed sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.286359 containerd[1466]: time="2025-03-17T17:49:53.286341179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jx8hc,Uid:e6f5fd80-8c58-46d8-a963-ff380eae673d,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.286578 kubelet[1775]: E0317 17:49:53.286528 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:53.286614 kubelet[1775]: E0317 17:49:53.286586 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-jx8hc" Mar 17 17:49:53.286614 kubelet[1775]: E0317 17:49:53.286603 1775 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-jx8hc" Mar 17 17:49:53.286657 kubelet[1775]: E0317 17:49:53.286635 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-jx8hc_default(e6f5fd80-8c58-46d8-a963-ff380eae673d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-jx8hc_default(e6f5fd80-8c58-46d8-a963-ff380eae673d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-jx8hc" podUID="e6f5fd80-8c58-46d8-a963-ff380eae673d" Mar 17 17:49:53.313355 containerd[1466]: time="2025-03-17T17:49:53.313229698Z" level=info msg="StartContainer for \"26417d097cc5df84e78029f2293bf155f98730b60adcccb20f0c9fc0baafe1d1\" returns successfully" Mar 17 17:49:53.442747 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:49:53.442844 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:49:53.859542 kubelet[1775]: E0317 17:49:53.859478 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:54.019346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43-shm.mount: Deactivated successfully. Mar 17 17:49:54.019439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc-shm.mount: Deactivated successfully. Mar 17 17:49:54.030791 kubelet[1775]: I0317 17:49:54.030762 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc" Mar 17 17:49:54.031621 containerd[1466]: time="2025-03-17T17:49:54.031384467Z" level=info msg="StopPodSandbox for \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\"" Mar 17 17:49:54.031621 containerd[1466]: time="2025-03-17T17:49:54.031538725Z" level=info msg="Ensure that sandbox 4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc in task-service has been cleanup successfully" Mar 17 17:49:54.031981 kubelet[1775]: I0317 17:49:54.031777 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43" Mar 17 17:49:54.032542 containerd[1466]: time="2025-03-17T17:49:54.032187227Z" level=info msg="StopPodSandbox for \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\"" Mar 17 17:49:54.032542 containerd[1466]: time="2025-03-17T17:49:54.032328674Z" level=info msg="Ensure that sandbox f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43 in task-service has been cleanup successfully" Mar 17 17:49:54.032984 systemd[1]: run-netns-cni\x2d667dbd01\x2d44a9\x2d9000\x2d94f4\x2d0d749c7953f2.mount: Deactivated successfully. 
Mar 17 17:49:54.034238 containerd[1466]: time="2025-03-17T17:49:54.034117238Z" level=info msg="TearDown network for sandbox \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\" successfully" Mar 17 17:49:54.034238 containerd[1466]: time="2025-03-17T17:49:54.034142581Z" level=info msg="StopPodSandbox for \"4f8028fc6748ac0a1b4c225c48bfd388c4678730beb522e79717064af7832bdc\" returns successfully" Mar 17 17:49:54.034403 containerd[1466]: time="2025-03-17T17:49:54.034376190Z" level=info msg="TearDown network for sandbox \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\" successfully" Mar 17 17:49:54.034526 containerd[1466]: time="2025-03-17T17:49:54.034467792Z" level=info msg="StopPodSandbox for \"f46eb131ff8348b4b55ca3f82ab760b19d622ca0eab639d945fc7476d8179f43\" returns successfully" Mar 17 17:49:54.035040 containerd[1466]: time="2025-03-17T17:49:54.034735552Z" level=info msg="StopPodSandbox for \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\"" Mar 17 17:49:54.035040 containerd[1466]: time="2025-03-17T17:49:54.034808978Z" level=info msg="TearDown network for sandbox \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\" successfully" Mar 17 17:49:54.035040 containerd[1466]: time="2025-03-17T17:49:54.034818026Z" level=info msg="StopPodSandbox for \"844c3ffee862312e1185ed91b274e30858eec2da0742cfe07814327f43d42f9e\" returns successfully" Mar 17 17:49:54.035040 containerd[1466]: time="2025-03-17T17:49:54.034888610Z" level=info msg="StopPodSandbox for \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\"" Mar 17 17:49:54.035040 containerd[1466]: time="2025-03-17T17:49:54.034947783Z" level=info msg="TearDown network for sandbox \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\" successfully" Mar 17 17:49:54.035040 containerd[1466]: time="2025-03-17T17:49:54.034956871Z" level=info msg="StopPodSandbox for \"62bd2b88ce86365c0518f1ce04f0c0c859d573d75db4f4164f962937573ca294\" returns successfully" Mar 17 17:49:54.034849 systemd[1]: run-netns-cni\x2d5bde6fca\x2d5a3e\x2d68af\x2df0be\x2dfcea1e37a916.mount: Deactivated successfully. 
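Editor's note on the run-netns-cni\x2d... mount units above: the \x2d sequences are systemd's escaping of "-" in unit names. A small illustrative decoder for that escaping, handling only the \xNN form (this is not a systemd API, just a sketch):

// Illustrative sketch: undo systemd's \xNN escaping in unit names, e.g.
// run-netns-cni\x2d5bde6fca... -> run-netns-cni-5bde6fca...
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d5bde6fca\x2d5a3e\x2d68af\x2df0be\x2dfcea1e37a916.mount`))
}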
Mar 17 17:49:54.035945 containerd[1466]: time="2025-03-17T17:49:54.035684163Z" level=info msg="StopPodSandbox for \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\"" Mar 17 17:49:54.035945 containerd[1466]: time="2025-03-17T17:49:54.035753385Z" level=info msg="TearDown network for sandbox \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\" successfully" Mar 17 17:49:54.035945 containerd[1466]: time="2025-03-17T17:49:54.035762073Z" level=info msg="StopPodSandbox for \"905cb4da5b8b01caaef2ee704397fd6e77c4264eba0f3c493b2574fa376af694\" returns successfully" Mar 17 17:49:54.036150 containerd[1466]: time="2025-03-17T17:49:54.036129403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jx8hc,Uid:e6f5fd80-8c58-46d8-a963-ff380eae673d,Namespace:default,Attempt:2,}" Mar 17 17:49:54.036731 containerd[1466]: time="2025-03-17T17:49:54.036593859Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\"" Mar 17 17:49:54.036731 containerd[1466]: time="2025-03-17T17:49:54.036667525Z" level=info msg="TearDown network for sandbox \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" successfully" Mar 17 17:49:54.036731 containerd[1466]: time="2025-03-17T17:49:54.036675973Z" level=info msg="StopPodSandbox for \"04f157c603d8ee82222a24e00f1439bde44fbe003acf96a155d4330a57737439\" returns successfully" Mar 17 17:49:54.037244 containerd[1466]: time="2025-03-17T17:49:54.037078574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:4,}" Mar 17 17:49:54.056500 kubelet[1775]: I0317 17:49:54.056383 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8phd5" podStartSLOduration=3.795604805 podStartE2EDuration="14.056366114s" podCreationTimestamp="2025-03-17 17:49:40 +0000 UTC" firstStartedPulling="2025-03-17 17:49:42.965281068 +0000 UTC m=+4.104548955" lastFinishedPulling="2025-03-17 17:49:53.226042377 +0000 UTC m=+14.365310264" observedRunningTime="2025-03-17 17:49:54.056281798 +0000 UTC m=+15.195549725" watchObservedRunningTime="2025-03-17 17:49:54.056366114 +0000 UTC m=+15.195634001" Mar 17 17:49:54.226233 systemd-networkd[1407]: calic7488026d87: Link UP Mar 17 17:49:54.226856 systemd-networkd[1407]: calic7488026d87: Gained carrier Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.078 [INFO][2627] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.094 [INFO][2627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.117-k8s-csi--node--driver--zrt5z-eth0 csi-node-driver- calico-system e545164e-181a-4d15-92e4-f279fff55ecc 872 0 2025-03-17 17:49:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.117 csi-node-driver-zrt5z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic7488026d87 [] []}} ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-" Mar 17 
17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.095 [INFO][2627] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.172 [INFO][2647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" HandleID="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Workload="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.185 [INFO][2647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" HandleID="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Workload="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400047cff0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.117", "pod":"csi-node-driver-zrt5z", "timestamp":"2025-03-17 17:49:54.172741855 +0000 UTC"}, Hostname:"10.0.0.117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.185 [INFO][2647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.185 [INFO][2647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
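The kubelet pod_startup_latency_tracker record above for calico-node-8phd5 reports podStartE2EDuration=14.056366114s but podStartSLOduration=3.795604805s: the SLO figure is the end-to-end startup time with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A worked check of that arithmetic using the monotonic m=+ offsets printed in the record (Python, values copied from the log):

    # Figures copied from the kubelet pod_startup_latency_tracker record for calico-node-8phd5.
    # The m=+N.NNN suffixes are monotonic-clock offsets, so subtracting them gives the
    # image-pull window directly, without wall-clock parsing.
    e2e          = 14.056366114   # podStartE2EDuration, seconds
    first_pull_m = 4.104548955    # firstStartedPulling  (m=+4.104548955)
    last_pull_m  = 14.365310264   # lastFinishedPulling  (m=+14.365310264)

    pull_window = last_pull_m - first_pull_m
    slo = e2e - pull_window
    print(f"image pull window ≈ {pull_window:.9f}s")   # ≈ 10.260761309
    print(f"podStartSLOduration ≈ {slo:.9f}s")         # ≈ 3.795604805, matching the log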
Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.185 [INFO][2647] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.117' Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.187 [INFO][2647] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.196 [INFO][2647] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.205 [INFO][2647] ipam/ipam.go 489: Trying affinity for 192.168.28.128/26 host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.207 [INFO][2647] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.209 [INFO][2647] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.209 [INFO][2647] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.211 [INFO][2647] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.215 [INFO][2647] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.219 [INFO][2647] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.129/26] block=192.168.28.128/26 handle="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.219 [INFO][2647] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.129/26] handle="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" host="10.0.0.117" Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.219 [INFO][2647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
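The IPAM exchange above is Calico's per-host allocation path: take the host-wide IPAM lock, confirm the node's affinity to the block 192.168.28.128/26, claim one address (192.168.28.129 for csi-node-driver-zrt5z), then release the lock; the nginx and nfs-server-provisioner pods later draw 192.168.28.130 and .131 from the same block. A small sketch with Python's ipaddress module of what that /26 affinity gives node 10.0.0.117:

    import ipaddress

    # The block this node holds an affinity for, per the IPAM lines above.
    block = ipaddress.ip_network("192.168.28.128/26")

    print(block.num_addresses)        # 64 addresses in a /26
    print(block.network_address + 1)  # 192.168.28.129 -- the first address claimed in this log

    # All three workload IPs assigned on this node fall inside the block.
    for ip in ("192.168.28.129", "192.168.28.130", "192.168.28.131"):
        assert ipaddress.ip_address(ip) in block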
Mar 17 17:49:54.237949 containerd[1466]: 2025-03-17 17:49:54.219 [INFO][2647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.129/26] IPv6=[] ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" HandleID="k8s-pod-network.8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Workload="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.238470 containerd[1466]: 2025-03-17 17:49:54.221 [INFO][2627] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-csi--node--driver--zrt5z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e545164e-181a-4d15-92e4-f279fff55ecc", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"", Pod:"csi-node-driver-zrt5z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic7488026d87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:54.238470 containerd[1466]: 2025-03-17 17:49:54.221 [INFO][2627] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.129/32] ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.238470 containerd[1466]: 2025-03-17 17:49:54.221 [INFO][2627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7488026d87 ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.238470 containerd[1466]: 2025-03-17 17:49:54.226 [INFO][2627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.238470 containerd[1466]: 2025-03-17 17:49:54.226 [INFO][2627] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" 
WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-csi--node--driver--zrt5z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e545164e-181a-4d15-92e4-f279fff55ecc", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed", Pod:"csi-node-driver-zrt5z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic7488026d87", MAC:"36:ca:66:e2:89:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:54.238470 containerd[1466]: 2025-03-17 17:49:54.236 [INFO][2627] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed" Namespace="calico-system" Pod="csi-node-driver-zrt5z" WorkloadEndpoint="10.0.0.117-k8s-csi--node--driver--zrt5z-eth0" Mar 17 17:49:54.253302 containerd[1466]: time="2025-03-17T17:49:54.253212112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:54.253302 containerd[1466]: time="2025-03-17T17:49:54.253291903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:54.253472 containerd[1466]: time="2025-03-17T17:49:54.253305996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:54.253472 containerd[1466]: time="2025-03-17T17:49:54.253380183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:54.274215 systemd[1]: Started cri-containerd-8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed.scope - libcontainer container 8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed. 
Mar 17 17:49:54.283089 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:49:54.291425 containerd[1466]: time="2025-03-17T17:49:54.291361089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zrt5z,Uid:e545164e-181a-4d15-92e4-f279fff55ecc,Namespace:calico-system,Attempt:4,} returns sandbox id \"8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed\"" Mar 17 17:49:54.292906 containerd[1466]: time="2025-03-17T17:49:54.292874326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:49:54.333300 systemd-networkd[1407]: calid5e066da159: Link UP Mar 17 17:49:54.333917 systemd-networkd[1407]: calid5e066da159: Gained carrier Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.073 [INFO][2616] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.091 [INFO][2616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0 nginx-deployment-8587fbcb89- default e6f5fd80-8c58-46d8-a963-ff380eae673d 954 0 2025-03-17 17:49:51 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.117 nginx-deployment-8587fbcb89-jx8hc eth0 default [] [] [kns.default ksa.default.default] calid5e066da159 [] []}} ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.091 [INFO][2616] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.172 [INFO][2645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" HandleID="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Workload="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.186 [INFO][2645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" HandleID="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Workload="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fdcb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.117", "pod":"nginx-deployment-8587fbcb89-jx8hc", "timestamp":"2025-03-17 17:49:54.172733728 +0000 UTC"}, Hostname:"10.0.0.117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.186 [INFO][2645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.219 [INFO][2645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.219 [INFO][2645] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.117' Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.288 [INFO][2645] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.293 [INFO][2645] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.303 [INFO][2645] ipam/ipam.go 489: Trying affinity for 192.168.28.128/26 host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.307 [INFO][2645] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.312 [INFO][2645] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.312 [INFO][2645] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.314 [INFO][2645] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266 Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.319 [INFO][2645] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.329 [INFO][2645] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.130/26] block=192.168.28.128/26 handle="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.329 [INFO][2645] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.130/26] handle="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" host="10.0.0.117" Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.329 [INFO][2645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:49:54.350116 containerd[1466]: 2025-03-17 17:49:54.329 [INFO][2645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.130/26] IPv6=[] ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" HandleID="k8s-pod-network.25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Workload="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.350757 containerd[1466]: 2025-03-17 17:49:54.331 [INFO][2616] cni-plugin/k8s.go 386: Populated endpoint ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"e6f5fd80-8c58-46d8-a963-ff380eae673d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-jx8hc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid5e066da159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:54.350757 containerd[1466]: 2025-03-17 17:49:54.331 [INFO][2616] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.130/32] ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.350757 containerd[1466]: 2025-03-17 17:49:54.331 [INFO][2616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5e066da159 ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.350757 containerd[1466]: 2025-03-17 17:49:54.334 [INFO][2616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.350757 containerd[1466]: 2025-03-17 17:49:54.334 [INFO][2616] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"e6f5fd80-8c58-46d8-a963-ff380eae673d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266", Pod:"nginx-deployment-8587fbcb89-jx8hc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid5e066da159", MAC:"96:eb:27:f3:10:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:54.350757 containerd[1466]: 2025-03-17 17:49:54.348 [INFO][2616] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266" Namespace="default" Pod="nginx-deployment-8587fbcb89-jx8hc" WorkloadEndpoint="10.0.0.117-k8s-nginx--deployment--8587fbcb89--jx8hc-eth0" Mar 17 17:49:54.366052 containerd[1466]: time="2025-03-17T17:49:54.365930573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:54.366163 containerd[1466]: time="2025-03-17T17:49:54.366056526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:54.366163 containerd[1466]: time="2025-03-17T17:49:54.366096202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:54.366647 containerd[1466]: time="2025-03-17T17:49:54.366593367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:54.385178 systemd[1]: Started cri-containerd-25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266.scope - libcontainer container 25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266. 
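Each WorkloadEndpoint written above carries two Calico profiles derived from the pod's namespace and service account: kns.calico-system plus ksa.calico-system.csi-node-driver for the CSI pod, kns.default plus ksa.default.default for the nginx pod, and later kns.default plus ksa.default.nfs-server-provisioner for the NFS provisioner. The naming convention, as it appears in these records, in a one-function sketch:

    def calico_profiles(namespace: str, service_account: str) -> list[str]:
        """Profile names Calico attaches to a workload endpoint in this log:
        one per namespace (kns.<ns>) and one per service account (ksa.<ns>.<sa>)."""
        return [f"kns.{namespace}", f"ksa.{namespace}.{service_account}"]

    print(calico_profiles("calico-system", "csi-node-driver"))
    # ['kns.calico-system', 'ksa.calico-system.csi-node-driver']
    print(calico_profiles("default", "default"))
    # ['kns.default', 'ksa.default.default']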
Mar 17 17:49:54.394428 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:49:54.409975 containerd[1466]: time="2025-03-17T17:49:54.409919828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jx8hc,Uid:e6f5fd80-8c58-46d8-a963-ff380eae673d,Namespace:default,Attempt:2,} returns sandbox id \"25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266\"" Mar 17 17:49:54.802130 kernel: bpftool[2894]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:49:54.859807 kubelet[1775]: E0317 17:49:54.859764 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:54.948155 systemd-networkd[1407]: vxlan.calico: Link UP Mar 17 17:49:54.948160 systemd-networkd[1407]: vxlan.calico: Gained carrier Mar 17 17:49:55.040892 kubelet[1775]: I0317 17:49:55.040865 1775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:55.323447 containerd[1466]: time="2025-03-17T17:49:55.323394521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:55.324297 containerd[1466]: time="2025-03-17T17:49:55.324094390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:49:55.325086 containerd[1466]: time="2025-03-17T17:49:55.325050741Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:55.330176 containerd[1466]: time="2025-03-17T17:49:55.330133611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:55.330717 containerd[1466]: time="2025-03-17T17:49:55.330675956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.037765279s" Mar 17 17:49:55.330717 containerd[1466]: time="2025-03-17T17:49:55.330706180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:49:55.331992 containerd[1466]: time="2025-03-17T17:49:55.331963687Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:49:55.332764 containerd[1466]: time="2025-03-17T17:49:55.332738335Z" level=info msg="CreateContainer within sandbox \"8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:49:55.345496 containerd[1466]: time="2025-03-17T17:49:55.345440065Z" level=info msg="CreateContainer within sandbox \"8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1bfe7bbf89e6dd7734c3e067cad9a0b389fcb018911d3bb0abeb38c0df7b5049\"" Mar 17 17:49:55.346257 containerd[1466]: time="2025-03-17T17:49:55.345961314Z" level=info msg="StartContainer for 
\"1bfe7bbf89e6dd7734c3e067cad9a0b389fcb018911d3bb0abeb38c0df7b5049\"" Mar 17 17:49:55.371200 systemd[1]: Started cri-containerd-1bfe7bbf89e6dd7734c3e067cad9a0b389fcb018911d3bb0abeb38c0df7b5049.scope - libcontainer container 1bfe7bbf89e6dd7734c3e067cad9a0b389fcb018911d3bb0abeb38c0df7b5049. Mar 17 17:49:55.382729 systemd-networkd[1407]: calic7488026d87: Gained IPv6LL Mar 17 17:49:55.400939 containerd[1466]: time="2025-03-17T17:49:55.400881221Z" level=info msg="StartContainer for \"1bfe7bbf89e6dd7734c3e067cad9a0b389fcb018911d3bb0abeb38c0df7b5049\" returns successfully" Mar 17 17:49:55.575226 systemd-networkd[1407]: calid5e066da159: Gained IPv6LL Mar 17 17:49:55.860538 kubelet[1775]: E0317 17:49:55.860380 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:56.018705 systemd[1]: run-containerd-runc-k8s.io-26417d097cc5df84e78029f2293bf155f98730b60adcccb20f0c9fc0baafe1d1-runc.IIYlpU.mount: Deactivated successfully. Mar 17 17:49:56.471602 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Mar 17 17:49:56.861117 kubelet[1775]: E0317 17:49:56.860998 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:57.185165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190373401.mount: Deactivated successfully. Mar 17 17:49:57.861205 kubelet[1775]: E0317 17:49:57.861161 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:58.146414 containerd[1466]: time="2025-03-17T17:49:58.146366012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:58.147426 containerd[1466]: time="2025-03-17T17:49:58.147383427Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69703867" Mar 17 17:49:58.148303 containerd[1466]: time="2025-03-17T17:49:58.148247881Z" level=info msg="ImageCreate event name:\"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:58.151621 containerd[1466]: time="2025-03-17T17:49:58.151570749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:58.152477 containerd[1466]: time="2025-03-17T17:49:58.152441087Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 2.820446817s" Mar 17 17:49:58.152477 containerd[1466]: time="2025-03-17T17:49:58.152473024Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:49:58.154330 containerd[1466]: time="2025-03-17T17:49:58.154294021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:49:58.154985 containerd[1466]: time="2025-03-17T17:49:58.154955970Z" level=info msg="CreateContainer within sandbox \"25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:49:58.209841 containerd[1466]: time="2025-03-17T17:49:58.209797454Z" level=info msg="CreateContainer within sandbox \"25f4ddb36a526179a5abc3ada1a3d80ab69e2c8e95b3d049cf72602803cab266\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"be4173b2d8df9b3eea910ade3380a803d191efb0c36dd52aac825c4167ab5268\"" Mar 17 17:49:58.210556 containerd[1466]: time="2025-03-17T17:49:58.210480093Z" level=info msg="StartContainer for \"be4173b2d8df9b3eea910ade3380a803d191efb0c36dd52aac825c4167ab5268\"" Mar 17 17:49:58.288245 systemd[1]: Started cri-containerd-be4173b2d8df9b3eea910ade3380a803d191efb0c36dd52aac825c4167ab5268.scope - libcontainer container be4173b2d8df9b3eea910ade3380a803d191efb0c36dd52aac825c4167ab5268. Mar 17 17:49:58.313394 containerd[1466]: time="2025-03-17T17:49:58.312737637Z" level=info msg="StartContainer for \"be4173b2d8df9b3eea910ade3380a803d191efb0c36dd52aac825c4167ab5268\" returns successfully" Mar 17 17:49:58.861870 kubelet[1775]: E0317 17:49:58.861826 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:59.330431 containerd[1466]: time="2025-03-17T17:49:59.330225773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:59.331218 containerd[1466]: time="2025-03-17T17:49:59.330929256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 17 17:49:59.331899 containerd[1466]: time="2025-03-17T17:49:59.331867488Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:59.335068 containerd[1466]: time="2025-03-17T17:49:59.334097955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:59.335068 containerd[1466]: time="2025-03-17T17:49:59.335052514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.180729198s" Mar 17 17:49:59.335181 containerd[1466]: time="2025-03-17T17:49:59.335082368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 17 17:49:59.337214 containerd[1466]: time="2025-03-17T17:49:59.337183015Z" level=info msg="CreateContainer within sandbox \"8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:49:59.353053 containerd[1466]: time="2025-03-17T17:49:59.351670923Z" level=info msg="CreateContainer within sandbox \"8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4303601cf651f6b7b566bec315670a5073a577ddf15d43f4502f6b314a557ddd\"" Mar 17 
17:49:59.353835 containerd[1466]: time="2025-03-17T17:49:59.353803624Z" level=info msg="StartContainer for \"4303601cf651f6b7b566bec315670a5073a577ddf15d43f4502f6b314a557ddd\"" Mar 17 17:49:59.393210 systemd[1]: Started cri-containerd-4303601cf651f6b7b566bec315670a5073a577ddf15d43f4502f6b314a557ddd.scope - libcontainer container 4303601cf651f6b7b566bec315670a5073a577ddf15d43f4502f6b314a557ddd. Mar 17 17:49:59.429784 containerd[1466]: time="2025-03-17T17:49:59.428275460Z" level=info msg="StartContainer for \"4303601cf651f6b7b566bec315670a5073a577ddf15d43f4502f6b314a557ddd\" returns successfully" Mar 17 17:49:59.852548 kubelet[1775]: E0317 17:49:59.852510 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:59.862797 kubelet[1775]: E0317 17:49:59.862762 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:00.017226 kubelet[1775]: I0317 17:50:00.015246 1775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:50:00.017226 kubelet[1775]: I0317 17:50:00.015287 1775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:50:00.067682 kubelet[1775]: I0317 17:50:00.067612 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-jx8hc" podStartSLOduration=5.324955674 podStartE2EDuration="9.067593676s" podCreationTimestamp="2025-03-17 17:49:51 +0000 UTC" firstStartedPulling="2025-03-17 17:49:54.41099191 +0000 UTC m=+15.550259797" lastFinishedPulling="2025-03-17 17:49:58.153629912 +0000 UTC m=+19.292897799" observedRunningTime="2025-03-17 17:49:59.059575327 +0000 UTC m=+20.198843214" watchObservedRunningTime="2025-03-17 17:50:00.067593676 +0000 UTC m=+21.206861563" Mar 17 17:50:00.067861 kubelet[1775]: I0317 17:50:00.067826 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zrt5z" podStartSLOduration=15.024713818 podStartE2EDuration="20.067821127s" podCreationTimestamp="2025-03-17 17:49:40 +0000 UTC" firstStartedPulling="2025-03-17 17:49:54.292661655 +0000 UTC m=+15.431929502" lastFinishedPulling="2025-03-17 17:49:59.335768924 +0000 UTC m=+20.475036811" observedRunningTime="2025-03-17 17:50:00.067436492 +0000 UTC m=+21.206704379" watchObservedRunningTime="2025-03-17 17:50:00.067821127 +0000 UTC m=+21.207089014" Mar 17 17:50:00.863737 kubelet[1775]: E0317 17:50:00.863687 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:01.864094 kubelet[1775]: E0317 17:50:01.863929 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:02.864973 kubelet[1775]: E0317 17:50:02.864921 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:03.866017 kubelet[1775]: E0317 17:50:03.865942 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:04.364187 systemd[1]: Created slice kubepods-besteffort-podcefd1a0c_6dbc_46f0_8c94_0fc56a7f95b4.slice - libcontainer container 
kubepods-besteffort-podcefd1a0c_6dbc_46f0_8c94_0fc56a7f95b4.slice. Mar 17 17:50:04.410118 kubelet[1775]: I0317 17:50:04.409965 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4-data\") pod \"nfs-server-provisioner-0\" (UID: \"cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4\") " pod="default/nfs-server-provisioner-0" Mar 17 17:50:04.410118 kubelet[1775]: I0317 17:50:04.410008 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrttb\" (UniqueName: \"kubernetes.io/projected/cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4-kube-api-access-rrttb\") pod \"nfs-server-provisioner-0\" (UID: \"cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4\") " pod="default/nfs-server-provisioner-0" Mar 17 17:50:04.670566 containerd[1466]: time="2025-03-17T17:50:04.670018592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4,Namespace:default,Attempt:0,}" Mar 17 17:50:04.867008 kubelet[1775]: E0317 17:50:04.866952 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:04.942604 systemd-networkd[1407]: cali60e51b789ff: Link UP Mar 17 17:50:04.943339 systemd-networkd[1407]: cali60e51b789ff: Gained carrier Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.731 [INFO][3217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.117-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4 1054 0 2025-03-17 17:50:04 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.117 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.731 [INFO][3217] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.766 [INFO][3230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" HandleID="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Workload="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.878 
[INFO][3230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" HandleID="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Workload="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9280), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.117", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:50:04.766200555 +0000 UTC"}, Hostname:"10.0.0.117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.878 [INFO][3230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.878 [INFO][3230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.878 [INFO][3230] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.117' Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.882 [INFO][3230] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.885 [INFO][3230] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.890 [INFO][3230] ipam/ipam.go 489: Trying affinity for 192.168.28.128/26 host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.892 [INFO][3230] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.894 [INFO][3230] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.894 [INFO][3230] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.897 [INFO][3230] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459 Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.908 [INFO][3230] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.934 [INFO][3230] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.131/26] block=192.168.28.128/26 handle="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.935 [INFO][3230] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.131/26] handle="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" host="10.0.0.117" Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.935 [INFO][3230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
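The systemd unit names in this log follow the kubelet/containerd cgroup layout under the systemd cgroup driver: a BestEffort pod gets a kubepods-besteffort-pod<UID>.slice with the dashes in the pod UID turned into underscores (kubepods-besteffort-podcefd1a0c_6dbc_46f0_8c94_0fc56a7f95b4.slice above), and every container or sandbox runs in a transient cri-containerd-<container-id>.scope started by the runc shim. A minimal sketch of that naming, using identifiers taken from the log:

    def pod_slice(pod_uid: str, qos_class: str = "besteffort") -> str:
        """Systemd slice created for a pod; dashes in the UID become underscores."""
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    def container_scope(container_id: str) -> str:
        """Transient scope the runc shim starts for a single container or sandbox."""
        return f"cri-containerd-{container_id}.scope"

    print(pod_slice("cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4"))
    # kubepods-besteffort-podcefd1a0c_6dbc_46f0_8c94_0fc56a7f95b4.slice
    print(container_scope("8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed"))
    # cri-containerd-8bd79ea2e698322aa269ed062f1b011ec4df7e2e9aaf4b32ca1f786c502b62ed.scope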
Mar 17 17:50:04.968970 containerd[1466]: 2025-03-17 17:50:04.935 [INFO][3230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.131/26] IPv6=[] ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" HandleID="k8s-pod-network.01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Workload="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:04.971278 containerd[1466]: 2025-03-17 17:50:04.936 [INFO][3217] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:50:04.971278 containerd[1466]: 2025-03-17 17:50:04.937 [INFO][3217] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.131/32] ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:04.971278 containerd[1466]: 2025-03-17 17:50:04.937 [INFO][3217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:04.971278 containerd[1466]: 2025-03-17 17:50:04.943 [INFO][3217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:04.971421 containerd[1466]: 2025-03-17 17:50:04.944 [INFO][3217] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e2:0a:d4:8e:9a:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:50:04.971421 containerd[1466]: 2025-03-17 17:50:04.964 [INFO][3217] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.117-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:05.005762 containerd[1466]: time="2025-03-17T17:50:05.005678179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:05.005932 containerd[1466]: time="2025-03-17T17:50:05.005780006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:05.005932 containerd[1466]: time="2025-03-17T17:50:05.005803012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:05.006080 containerd[1466]: time="2025-03-17T17:50:05.005923243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:05.027205 systemd[1]: Started cri-containerd-01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459.scope - libcontainer container 01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459. 
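In the endpoint dump just above, the NFS provisioner's ports are printed as Go hex literals (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), while the endpoint summary a few records earlier listed the same ports in decimal (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662, each with a UDP twin on the same number). A quick check that the two listings agree:

    # Named ports and hex values as printed in the WorkloadEndpoint dump above.
    hex_ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
                 "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}

    # The same ports in decimal, from the earlier endpoint summary.
    decimal_ports = {"nfs": 2049, "nlockmgr": 32803, "mountd": 20048,
                     "rquotad": 875, "rpcbind": 111, "statd": 662}

    assert hex_ports == decimal_ports
    for name in hex_ports:
        print(f"{name}: 0x{hex_ports[name]:x} == {decimal_ports[name]}")   # e.g. nfs: 0x801 == 2049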
Mar 17 17:50:05.045468 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:50:05.119238 containerd[1466]: time="2025-03-17T17:50:05.119081919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cefd1a0c-6dbc-46f0-8c94-0fc56a7f95b4,Namespace:default,Attempt:0,} returns sandbox id \"01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459\"" Mar 17 17:50:05.121583 containerd[1466]: time="2025-03-17T17:50:05.121478350Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:50:05.867180 kubelet[1775]: E0317 17:50:05.867097 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:06.008841 systemd-networkd[1407]: cali60e51b789ff: Gained IPv6LL Mar 17 17:50:06.801504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060957933.mount: Deactivated successfully. Mar 17 17:50:06.867780 kubelet[1775]: E0317 17:50:06.867737 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:07.868212 kubelet[1775]: E0317 17:50:07.868095 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:08.153761 containerd[1466]: time="2025-03-17T17:50:08.153619461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:08.181530 containerd[1466]: time="2025-03-17T17:50:08.181074497Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Mar 17 17:50:08.190657 containerd[1466]: time="2025-03-17T17:50:08.189201867Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:08.232725 containerd[1466]: time="2025-03-17T17:50:08.232671029Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.111145427s" Mar 17 17:50:08.232967 containerd[1466]: time="2025-03-17T17:50:08.232946010Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 17:50:08.233171 containerd[1466]: time="2025-03-17T17:50:08.232914403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:08.236291 containerd[1466]: time="2025-03-17T17:50:08.236261589Z" level=info msg="CreateContainer within sandbox \"01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:50:08.256535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount841144692.mount: Deactivated successfully. 
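The pull above fetched 87373625 bytes of registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8 in 3.111145427s, and the earlier ghcr.io/flatcar/nginx:latest pull moved 69703867 bytes in 2.820446817s. Turning those reported figures into rough effective pull throughput (plain arithmetic on the values in the log; the durations include unpacking, so these are lower bounds on network speed):

    # (bytes read, seconds) as reported by containerd for two of the pulls in this log.
    pulls = {
        "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8": (87_373_625, 3.111145427),
        "ghcr.io/flatcar/nginx:latest": (69_703_867, 2.820446817),
    }

    for image, (nbytes, seconds) in pulls.items():
        mib_per_s = nbytes / seconds / (1024 * 1024)
        print(f"{image}: {mib_per_s:.1f} MiB/s")   # roughly 26.8 and 23.6 MiB/s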
Mar 17 17:50:08.261373 containerd[1466]: time="2025-03-17T17:50:08.261327732Z" level=info msg="CreateContainer within sandbox \"01962c19b583737a652f310baedae1fe6c60e865247d041adafd09788998a459\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bfd939a5b89f1a5e9aded79a58135e15315b8d373d415c36ae11db949602a6ce\"" Mar 17 17:50:08.262083 containerd[1466]: time="2025-03-17T17:50:08.262055534Z" level=info msg="StartContainer for \"bfd939a5b89f1a5e9aded79a58135e15315b8d373d415c36ae11db949602a6ce\"" Mar 17 17:50:08.298231 systemd[1]: Started cri-containerd-bfd939a5b89f1a5e9aded79a58135e15315b8d373d415c36ae11db949602a6ce.scope - libcontainer container bfd939a5b89f1a5e9aded79a58135e15315b8d373d415c36ae11db949602a6ce. Mar 17 17:50:08.320219 containerd[1466]: time="2025-03-17T17:50:08.320181881Z" level=info msg="StartContainer for \"bfd939a5b89f1a5e9aded79a58135e15315b8d373d415c36ae11db949602a6ce\" returns successfully" Mar 17 17:50:08.868986 kubelet[1775]: E0317 17:50:08.868940 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:09.870123 kubelet[1775]: E0317 17:50:09.870079 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:10.870945 kubelet[1775]: E0317 17:50:10.870898 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:11.871681 kubelet[1775]: E0317 17:50:11.871635 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:12.872314 kubelet[1775]: E0317 17:50:12.872266 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:13.872426 kubelet[1775]: E0317 17:50:13.872379 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:14.873006 kubelet[1775]: E0317 17:50:14.872952 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:15.873821 kubelet[1775]: E0317 17:50:15.873777 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:16.102710 update_engine[1447]: I20250317 17:50:16.102092 1447 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:50:16.151239 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3423) Mar 17 17:50:16.188142 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3427) Mar 17 17:50:16.874665 kubelet[1775]: E0317 17:50:16.874611 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:17.874995 kubelet[1775]: E0317 17:50:17.874944 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:18.183311 kubelet[1775]: I0317 17:50:18.182979 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.068991267 podStartE2EDuration="14.182962267s" podCreationTimestamp="2025-03-17 17:50:04 +0000 UTC" firstStartedPulling="2025-03-17 17:50:05.120952531 +0000 UTC m=+26.260220378" lastFinishedPulling="2025-03-17 17:50:08.234923491 +0000 UTC m=+29.374191378" observedRunningTime="2025-03-17 17:50:09.092699684 +0000 UTC m=+30.231967571" watchObservedRunningTime="2025-03-17 17:50:18.182962267 +0000 UTC m=+39.322230154" Mar 17 17:50:18.193772 systemd[1]: Created slice kubepods-besteffort-pod37cc3576_bf2a_4396_9bd8_655aab4c951a.slice - libcontainer container kubepods-besteffort-pod37cc3576_bf2a_4396_9bd8_655aab4c951a.slice. Mar 17 17:50:18.292187 kubelet[1775]: I0317 17:50:18.292135 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-33411c7b-af3a-4708-bfc1-20999f516e30\" (UniqueName: \"kubernetes.io/nfs/37cc3576-bf2a-4396-9bd8-655aab4c951a-pvc-33411c7b-af3a-4708-bfc1-20999f516e30\") pod \"test-pod-1\" (UID: \"37cc3576-bf2a-4396-9bd8-655aab4c951a\") " pod="default/test-pod-1" Mar 17 17:50:18.292187 kubelet[1775]: I0317 17:50:18.292180 1775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcpff\" (UniqueName: \"kubernetes.io/projected/37cc3576-bf2a-4396-9bd8-655aab4c951a-kube-api-access-jcpff\") pod \"test-pod-1\" (UID: \"37cc3576-bf2a-4396-9bd8-655aab4c951a\") " pod="default/test-pod-1" Mar 17 17:50:18.411051 kernel: FS-Cache: Loaded Mar 17 17:50:18.434403 kernel: RPC: Registered named UNIX socket transport module. Mar 17 17:50:18.434524 kernel: RPC: Registered udp transport module. Mar 17 17:50:18.434539 kernel: RPC: Registered tcp transport module. Mar 17 17:50:18.435598 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:50:18.436299 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 17:50:18.595235 kernel: NFS: Registering the id_resolver key type Mar 17 17:50:18.595342 kernel: Key type id_resolver registered Mar 17 17:50:18.595359 kernel: Key type id_legacy registered Mar 17 17:50:18.617545 nfsidmap[3445]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 17:50:18.621139 nfsidmap[3446]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 17:50:18.796867 containerd[1466]: time="2025-03-17T17:50:18.796757626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:37cc3576-bf2a-4396-9bd8-655aab4c951a,Namespace:default,Attempt:0,}" Mar 17 17:50:18.875147 kubelet[1775]: E0317 17:50:18.875102 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:18.959907 systemd-networkd[1407]: cali5ec59c6bf6e: Link UP Mar 17 17:50:18.960484 systemd-networkd[1407]: cali5ec59c6bf6e: Gained carrier Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.872 [INFO][3447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.117-k8s-test--pod--1-eth0 default 37cc3576-bf2a-4396-9bd8-655aab4c951a 1116 0 2025-03-17 17:50:04 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.117 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.872 [INFO][3447] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.902 [INFO][3460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" HandleID="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Workload="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.916 [INFO][3460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" HandleID="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Workload="10.0.0.117-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000432b10), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.117", "pod":"test-pod-1", "timestamp":"2025-03-17 17:50:18.902643167 +0000 UTC"}, Hostname:"10.0.0.117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.916 [INFO][3460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.917 [INFO][3460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.917 [INFO][3460] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.117' Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.919 [INFO][3460] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.925 [INFO][3460] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.933 [INFO][3460] ipam/ipam.go 489: Trying affinity for 192.168.28.128/26 host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.936 [INFO][3460] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.939 [INFO][3460] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.939 [INFO][3460] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.942 [INFO][3460] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6 Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.947 [INFO][3460] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.955 [INFO][3460] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.132/26] block=192.168.28.128/26 handle="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.955 [INFO][3460] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.132/26] handle="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" host="10.0.0.117" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.956 [INFO][3460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.956 [INFO][3460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.132/26] IPv6=[] ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" HandleID="k8s-pod-network.79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Workload="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.971273 containerd[1466]: 2025-03-17 17:50:18.957 [INFO][3447] cni-plugin/k8s.go 386: Populated endpoint ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"37cc3576-bf2a-4396-9bd8-655aab4c951a", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:50:18.975735 containerd[1466]: 2025-03-17 17:50:18.957 [INFO][3447] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.132/32] ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.975735 containerd[1466]: 2025-03-17 17:50:18.958 [INFO][3447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.975735 containerd[1466]: 2025-03-17 17:50:18.960 [INFO][3447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.975735 containerd[1466]: 2025-03-17 17:50:18.961 [INFO][3447] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.117-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"37cc3576-bf2a-4396-9bd8-655aab4c951a", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 4, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.117", ContainerID:"79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"b6:8f:99:15:84:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:50:18.975735 containerd[1466]: 2025-03-17 17:50:18.968 [INFO][3447] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.117-k8s-test--pod--1-eth0" Mar 17 17:50:18.998520 containerd[1466]: time="2025-03-17T17:50:18.998259269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:18.998520 containerd[1466]: time="2025-03-17T17:50:18.998339999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:18.998520 containerd[1466]: time="2025-03-17T17:50:18.998355441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:18.998520 containerd[1466]: time="2025-03-17T17:50:18.998450454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:19.015260 systemd[1]: Started cri-containerd-79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6.scope - libcontainer container 79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6. 
Mar 17 17:50:19.028555 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:50:19.046710 containerd[1466]: time="2025-03-17T17:50:19.046669475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:37cc3576-bf2a-4396-9bd8-655aab4c951a,Namespace:default,Attempt:0,} returns sandbox id \"79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6\"" Mar 17 17:50:19.048864 containerd[1466]: time="2025-03-17T17:50:19.048457581Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:50:19.342085 containerd[1466]: time="2025-03-17T17:50:19.341948657Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:19.342616 containerd[1466]: time="2025-03-17T17:50:19.342557414Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:50:19.346151 containerd[1466]: time="2025-03-17T17:50:19.346112143Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 297.620717ms" Mar 17 17:50:19.346151 containerd[1466]: time="2025-03-17T17:50:19.346150307Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:50:19.348189 containerd[1466]: time="2025-03-17T17:50:19.348081551Z" level=info msg="CreateContainer within sandbox \"79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:50:19.358410 containerd[1466]: time="2025-03-17T17:50:19.358358888Z" level=info msg="CreateContainer within sandbox \"79588aed7c11a3679eff2fce5cc90a42d1cab6c4c2ac2023e37e37d01d5065d6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"12f4dfc82b4aef861b8e7c32a6094c8029a3da9fc184fb156a9c0024047848b7\"" Mar 17 17:50:19.359829 containerd[1466]: time="2025-03-17T17:50:19.358763019Z" level=info msg="StartContainer for \"12f4dfc82b4aef861b8e7c32a6094c8029a3da9fc184fb156a9c0024047848b7\"" Mar 17 17:50:19.387229 systemd[1]: Started cri-containerd-12f4dfc82b4aef861b8e7c32a6094c8029a3da9fc184fb156a9c0024047848b7.scope - libcontainer container 12f4dfc82b4aef861b8e7c32a6094c8029a3da9fc184fb156a9c0024047848b7. 
Mar 17 17:50:19.418720 containerd[1466]: time="2025-03-17T17:50:19.418581528Z" level=info msg="StartContainer for \"12f4dfc82b4aef861b8e7c32a6094c8029a3da9fc184fb156a9c0024047848b7\" returns successfully" Mar 17 17:50:19.852332 kubelet[1775]: E0317 17:50:19.852285 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:19.875622 kubelet[1775]: E0317 17:50:19.875580 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:20.121613 kubelet[1775]: I0317 17:50:20.121397 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.822707344 podStartE2EDuration="16.121380354s" podCreationTimestamp="2025-03-17 17:50:04 +0000 UTC" firstStartedPulling="2025-03-17 17:50:19.048188547 +0000 UTC m=+40.187456434" lastFinishedPulling="2025-03-17 17:50:19.346861597 +0000 UTC m=+40.486129444" observedRunningTime="2025-03-17 17:50:20.121059075 +0000 UTC m=+41.260327002" watchObservedRunningTime="2025-03-17 17:50:20.121380354 +0000 UTC m=+41.260648241" Mar 17 17:50:20.876208 kubelet[1775]: E0317 17:50:20.876147 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:20.982199 systemd-networkd[1407]: cali5ec59c6bf6e: Gained IPv6LL Mar 17 17:50:21.877065 kubelet[1775]: E0317 17:50:21.876989 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"