Apr 30 00:59:09.923664 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:59:09.923686 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:59:09.923696 kernel: KASLR enabled
Apr 30 00:59:09.923702 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:59:09.923708 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Apr 30 00:59:09.923714 kernel: random: crng init done
Apr 30 00:59:09.923721 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:59:09.923727 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Apr 30 00:59:09.923734 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:59:09.923741 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923747 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923753 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923759 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923765 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923773 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923781 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923787 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923794 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:59:09.923800 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Apr 30 00:59:09.923806 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:59:09.923813 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:59:09.923820 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Apr 30 00:59:09.923826 kernel: Zone ranges:
Apr 30 00:59:09.923832 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:59:09.923838 kernel: DMA32 empty
Apr 30 00:59:09.923846 kernel: Normal empty
Apr 30 00:59:09.923852 kernel: Movable zone start for each node
Apr 30 00:59:09.923858 kernel: Early memory node ranges
Apr 30 00:59:09.923865 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Apr 30 00:59:09.923871 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Apr 30 00:59:09.923877 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Apr 30 00:59:09.923884 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Apr 30 00:59:09.923890 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Apr 30 00:59:09.923896 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Apr 30 00:59:09.923902 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Apr 30 00:59:09.923909 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:59:09.923915 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Apr 30 00:59:09.923932 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:59:09.923938 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:59:09.923945 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:59:09.923954 kernel: psci: Trusted OS migration not required
Apr 30 00:59:09.923961 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:59:09.923968 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:59:09.923977 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:59:09.923984 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:59:09.923991 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 30 00:59:09.923998 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:59:09.924004 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:59:09.924011 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:59:09.924018 kernel: CPU features: detected: Spectre-v4
Apr 30 00:59:09.924039 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:59:09.924046 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:59:09.924053 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:59:09.924061 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:59:09.924068 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:59:09.924075 kernel: alternatives: applying boot alternatives
Apr 30 00:59:09.924088 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:59:09.924096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:59:09.924103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:59:09.924110 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:59:09.924116 kernel: Fallback order for Node 0: 0
Apr 30 00:59:09.924123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Apr 30 00:59:09.924130 kernel: Policy zone: DMA
Apr 30 00:59:09.924137 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:59:09.924145 kernel: software IO TLB: area num 4.
Apr 30 00:59:09.924153 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Apr 30 00:59:09.924160 kernel: Memory: 2386464K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185824K reserved, 0K cma-reserved)
Apr 30 00:59:09.924167 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 00:59:09.924174 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:59:09.924182 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:59:09.924189 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 00:59:09.924196 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:59:09.924203 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:59:09.924210 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:59:09.924216 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 00:59:09.924223 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:59:09.924231 kernel: GICv3: 256 SPIs implemented
Apr 30 00:59:09.924238 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:59:09.924245 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:59:09.924252 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:59:09.924259 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:59:09.924265 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:59:09.924272 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:59:09.924279 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:59:09.924286 kernel: GICv3: using LPI property table @0x00000000400f0000
Apr 30 00:59:09.924293 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Apr 30 00:59:09.924300 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:59:09.924308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:59:09.924315 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:59:09.924323 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:59:09.924330 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:59:09.924337 kernel: arm-pv: using stolen time PV
Apr 30 00:59:09.924344 kernel: Console: colour dummy device 80x25
Apr 30 00:59:09.924351 kernel: ACPI: Core revision 20230628
Apr 30 00:59:09.924359 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:59:09.924366 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:59:09.924373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:59:09.924381 kernel: landlock: Up and running.
Apr 30 00:59:09.924388 kernel: SELinux: Initializing.
Apr 30 00:59:09.924396 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:59:09.924403 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:59:09.924410 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:59:09.924418 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:59:09.924425 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:59:09.924432 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:59:09.924439 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:59:09.924450 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:59:09.924457 kernel: Remapping and enabling EFI services.
Apr 30 00:59:09.924464 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:59:09.924471 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:59:09.924478 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:59:09.924485 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Apr 30 00:59:09.924493 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:59:09.924499 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:59:09.924506 kernel: Detected PIPT I-cache on CPU2
Apr 30 00:59:09.924515 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 30 00:59:09.924522 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Apr 30 00:59:09.924529 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:59:09.924541 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 30 00:59:09.924550 kernel: Detected PIPT I-cache on CPU3
Apr 30 00:59:09.924557 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 30 00:59:09.924565 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Apr 30 00:59:09.924575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:59:09.924582 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 30 00:59:09.924589 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 00:59:09.924602 kernel: SMP: Total of 4 processors activated.
Apr 30 00:59:09.924611 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:59:09.924619 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:59:09.924627 kernel: CPU features: detected: Common not Private translations
Apr 30 00:59:09.924634 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:59:09.924642 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:59:09.924649 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:59:09.924660 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:59:09.924667 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:59:09.924674 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:59:09.924682 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:59:09.924689 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:59:09.924697 kernel: alternatives: applying system-wide alternatives
Apr 30 00:59:09.924704 kernel: devtmpfs: initialized
Apr 30 00:59:09.924712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:59:09.924721 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 00:59:09.924731 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:59:09.924741 kernel: SMBIOS 3.0.0 present.
Apr 30 00:59:09.924750 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Apr 30 00:59:09.924758 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:59:09.924765 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:59:09.924773 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:59:09.924780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:59:09.924788 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:59:09.924796 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Apr 30 00:59:09.924804 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:59:09.924811 kernel: cpuidle: using governor menu
Apr 30 00:59:09.924819 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:59:09.924826 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:59:09.924833 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:59:09.924840 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:59:09.924848 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:59:09.924855 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:59:09.924862 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:59:09.924870 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:59:09.924878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:59:09.924886 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:59:09.924893 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:59:09.924900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:59:09.924907 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:59:09.924915 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:59:09.924935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:59:09.924943 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:59:09.924953 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:59:09.924965 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:59:09.924973 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:59:09.924980 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:59:09.924987 kernel: ACPI: Interpreter enabled
Apr 30 00:59:09.924994 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:59:09.925002 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:59:09.925009 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:59:09.925016 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:59:09.925024 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:59:09.925172 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:59:09.925249 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:59:09.925316 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:59:09.925385 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:59:09.925451 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:59:09.925461 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:59:09.925469 kernel: PCI host bridge to bus 0000:00
Apr 30 00:59:09.925542 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:59:09.925602 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:59:09.925662 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:59:09.925723 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:59:09.925805 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:59:09.925883 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 00:59:09.925989 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Apr 30 00:59:09.926057 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Apr 30 00:59:09.926209 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:59:09.926284 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:59:09.926352 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Apr 30 00:59:09.926419 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Apr 30 00:59:09.926482 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:59:09.926546 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:59:09.926605 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:59:09.926615 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:59:09.926623 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:59:09.926630 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:59:09.926637 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:59:09.926645 kernel: iommu: Default domain type: Translated
Apr 30 00:59:09.926652 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:59:09.926662 kernel: efivars: Registered efivars operations
Apr 30 00:59:09.926669 kernel: vgaarb: loaded
Apr 30 00:59:09.926676 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:59:09.926684 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:59:09.926691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:59:09.926698 kernel: pnp: PnP ACPI init
Apr 30 00:59:09.926776 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:59:09.926786 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:59:09.926794 kernel: NET: Registered PF_INET protocol family
Apr 30 00:59:09.926804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:59:09.926811 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:59:09.926819 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:59:09.926826 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:59:09.926834 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:59:09.926841 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:59:09.926849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:59:09.926856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:59:09.926865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:59:09.926872 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:59:09.926880 kernel: kvm [1]: HYP mode not available
Apr 30 00:59:09.926887 kernel: Initialise system trusted keyrings
Apr 30 00:59:09.926894 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:59:09.926901 kernel: Key type asymmetric registered
Apr 30 00:59:09.926908 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:59:09.926916 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:59:09.926940 kernel: io scheduler mq-deadline registered
Apr 30 00:59:09.926947 kernel: io scheduler kyber registered
Apr 30 00:59:09.926957 kernel: io scheduler bfq registered
Apr 30 00:59:09.926964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:59:09.926972 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:59:09.926980 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:59:09.927055 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Apr 30 00:59:09.927065 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:59:09.927072 kernel: thunder_xcv, ver 1.0
Apr 30 00:59:09.927087 kernel: thunder_bgx, ver 1.0
Apr 30 00:59:09.927095 kernel: nicpf, ver 1.0
Apr 30 00:59:09.927105 kernel: nicvf, ver 1.0
Apr 30 00:59:09.927187 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:59:09.927252 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:59:09 UTC (1745974749)
Apr 30 00:59:09.927262 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:59:09.927270 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:59:09.927277 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:59:09.927285 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:59:09.927292 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:59:09.927302 kernel: Segment Routing with IPv6
Apr 30 00:59:09.927310 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:59:09.927317 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:59:09.927324 kernel: Key type dns_resolver registered
Apr 30 00:59:09.927331 kernel: registered taskstats version 1
Apr 30 00:59:09.927339 kernel: Loading compiled-in X.509 certificates
Apr 30 00:59:09.927346 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:59:09.927354 kernel: Key type .fscrypt registered
Apr 30 00:59:09.927361 kernel: Key type fscrypt-provisioning registered
Apr 30 00:59:09.927370 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:59:09.927382 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:59:09.927390 kernel: ima: No architecture policies found
Apr 30 00:59:09.927400 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:59:09.927408 kernel: clk: Disabling unused clocks
Apr 30 00:59:09.927415 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:59:09.927423 kernel: Run /init as init process
Apr 30 00:59:09.927430 kernel: with arguments:
Apr 30 00:59:09.927437 kernel: /init
Apr 30 00:59:09.927445 kernel: with environment:
Apr 30 00:59:09.927453 kernel: HOME=/
Apr 30 00:59:09.927460 kernel: TERM=linux
Apr 30 00:59:09.927467 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:59:09.927476 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:59:09.927485 systemd[1]: Detected virtualization kvm.
Apr 30 00:59:09.927493 systemd[1]: Detected architecture arm64.
Apr 30 00:59:09.927502 systemd[1]: Running in initrd.
Apr 30 00:59:09.927510 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:59:09.927517 systemd[1]: Hostname set to .
Apr 30 00:59:09.927525 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:59:09.927533 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:59:09.927541 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:59:09.927549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:59:09.927558 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:59:09.927567 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:59:09.927575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:59:09.927583 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:59:09.927593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:59:09.927601 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:59:09.927609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:59:09.927617 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:59:09.927626 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:59:09.927634 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:59:09.927642 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:59:09.927650 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:59:09.927658 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:59:09.927665 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:59:09.927673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:59:09.927682 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:59:09.927689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:59:09.927699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:59:09.927707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:59:09.927714 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:59:09.927722 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:59:09.927730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:59:09.927738 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:59:09.927746 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:59:09.927754 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:59:09.927763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:59:09.927771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:59:09.927779 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:59:09.927787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:59:09.927795 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:59:09.927818 systemd-journald[239]: Collecting audit messages is disabled.
Apr 30 00:59:09.927839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:59:09.927847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:59:09.927855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:59:09.927865 systemd-journald[239]: Journal started
Apr 30 00:59:09.927883 systemd-journald[239]: Runtime Journal (/run/log/journal/ee8dabcac86d4c0498885f59f7e7bc78) is 5.9M, max 47.3M, 41.4M free.
Apr 30 00:59:09.914042 systemd-modules-load[240]: Inserted module 'overlay'
Apr 30 00:59:09.931609 systemd-modules-load[240]: Inserted module 'br_netfilter'
Apr 30 00:59:09.933341 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:59:09.933363 kernel: Bridge firewalling registered
Apr 30 00:59:09.932843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:59:09.934602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:59:09.949149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:59:09.951086 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:59:09.953087 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:59:09.955410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:59:09.965493 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:59:09.967170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:59:09.969548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:59:09.979174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:59:09.980450 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:59:09.983652 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:59:09.999503 dracut-cmdline[280]: dracut-dracut-053
Apr 30 00:59:10.002353 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:59:10.007371 systemd-resolved[277]: Positive Trust Anchors:
Apr 30 00:59:10.007389 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:59:10.007420 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:59:10.012411 systemd-resolved[277]: Defaulting to hostname 'linux'.
Apr 30 00:59:10.013550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:59:10.019155 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:59:10.085966 kernel: SCSI subsystem initialized
Apr 30 00:59:10.093958 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:59:10.101959 kernel: iscsi: registered transport (tcp)
Apr 30 00:59:10.122965 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:59:10.123030 kernel: QLogic iSCSI HBA Driver
Apr 30 00:59:10.172695 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:59:10.185154 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:59:10.204450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:59:10.204513 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:59:10.206244 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:59:10.253960 kernel: raid6: neonx8 gen() 15525 MB/s
Apr 30 00:59:10.270973 kernel: raid6: neonx4 gen() 14465 MB/s
Apr 30 00:59:10.287962 kernel: raid6: neonx2 gen() 13007 MB/s
Apr 30 00:59:10.306183 kernel: raid6: neonx1 gen() 10417 MB/s
Apr 30 00:59:10.323134 kernel: raid6: int64x8 gen() 6962 MB/s
Apr 30 00:59:10.339022 kernel: raid6: int64x4 gen() 7319 MB/s
Apr 30 00:59:10.355962 kernel: raid6: int64x2 gen() 6140 MB/s
Apr 30 00:59:10.379125 kernel: raid6: int64x1 gen() 5049 MB/s
Apr 30 00:59:10.379195 kernel: raid6: using algorithm neonx8 gen() 15525 MB/s
Apr 30 00:59:10.391105 kernel: raid6: .... xor() 11923 MB/s, rmw enabled
Apr 30 00:59:10.391170 kernel: raid6: using neon recovery algorithm
Apr 30 00:59:10.396959 kernel: xor: measuring software checksum speed
Apr 30 00:59:10.397016 kernel: 8regs : 16995 MB/sec
Apr 30 00:59:10.398190 kernel: 32regs : 19608 MB/sec
Apr 30 00:59:10.399464 kernel: arm64_neon : 26883 MB/sec
Apr 30 00:59:10.399484 kernel: xor: using function: arm64_neon (26883 MB/sec)
Apr 30 00:59:10.453968 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:59:10.467224 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:59:10.477225 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:59:10.490529 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Apr 30 00:59:10.493956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:59:10.512227 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:59:10.526841 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Apr 30 00:59:10.557717 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:59:10.574157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:59:10.621266 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:59:10.631135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:59:10.641956 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:59:10.645201 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:59:10.646564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:59:10.650260 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:59:10.660121 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:59:10.672518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:59:10.676167 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Apr 30 00:59:10.690869 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 30 00:59:10.691000 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:59:10.691012 kernel: GPT:9289727 != 19775487
Apr 30 00:59:10.691022 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:59:10.691040 kernel: GPT:9289727 != 19775487
Apr 30 00:59:10.691049 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:59:10.691059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:59:10.682352 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:59:10.682473 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:59:10.685907 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:59:10.689758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:59:10.689941 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:59:10.691138 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:59:10.701505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:59:10.712698 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518)
Apr 30 00:59:10.712749 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (516)
Apr 30 00:59:10.716128 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 00:59:10.717589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:59:10.725440 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 00:59:10.738584 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 00:59:10.739898 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 00:59:10.746927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:59:10.764102 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:59:10.766123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:59:10.771565 disk-uuid[553]: Primary Header is updated.
Apr 30 00:59:10.771565 disk-uuid[553]: Secondary Entries is updated.
Apr 30 00:59:10.771565 disk-uuid[553]: Secondary Header is updated.
Apr 30 00:59:10.774934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:59:10.804797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:59:11.811947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:59:11.812228 disk-uuid[554]: The operation has completed successfully.
Apr 30 00:59:11.831134 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:59:11.831251 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:59:11.859132 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:59:11.863264 sh[572]: Success
Apr 30 00:59:11.874934 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:59:11.905850 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:59:11.918828 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:59:11.922738 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:59:11.933540 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:59:11.933583 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:59:11.933594 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:59:11.935933 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:59:11.935950 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:59:11.940020 kernel: BTRFS info (device dm-0): auto enabling async discard
Apr 30 00:59:11.940020 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:59:11.941547 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:59:11.956135 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:59:11.958221 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:59:11.966185 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:59:11.966233 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:59:11.966949 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:59:11.970940 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:59:11.980910 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:59:11.982766 kernel: BTRFS info (device vda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:59:11.989448 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:59:11.999116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:59:12.072804 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:59:12.093107 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:59:12.099690 ignition[664]: Ignition 2.19.0
Apr 30 00:59:12.099701 ignition[664]: Stage: fetch-offline
Apr 30 00:59:12.099741 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:59:12.099749 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:59:12.099960 ignition[664]: parsed url from cmdline: ""
Apr 30 00:59:12.099969 ignition[664]: no config URL provided
Apr 30 00:59:12.099975 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:59:12.099984 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:59:12.100013 ignition[664]: op(1): [started] loading QEMU firmware config module
Apr 30 00:59:12.100017 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 30 00:59:12.108156 ignition[664]: op(1): [finished] loading QEMU firmware config module
Apr 30 00:59:12.113023 systemd-networkd[764]: lo: Link UP
Apr 30 00:59:12.113032 systemd-networkd[764]: lo: Gained carrier
Apr 30 00:59:12.114306 ignition[664]: parsing config with SHA512: 14b4dd11c064c860c108d05ab0abdfc0bf447fc49a6e55070f077a62d07bd4e90f7d49a85cb7e4be823ae7f34d79506791d1e9fdbfeed4d56616520b124cc67f
Apr 30 00:59:12.113694 systemd-networkd[764]: Enumeration completed
Apr 30 00:59:12.114108 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:59:12.114111 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:59:12.117613 ignition[664]: fetch-offline: fetch-offline passed
Apr 30 00:59:12.114998 systemd-networkd[764]: eth0: Link UP
Apr 30 00:59:12.117676 ignition[664]: Ignition finished successfully
Apr 30 00:59:12.115001 systemd-networkd[764]: eth0: Gained carrier
Apr 30 00:59:12.115008 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:59:12.116563 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:59:12.117345 unknown[664]: fetched base config from "system"
Apr 30 00:59:12.117351 unknown[664]: fetched user config from "qemu"
Apr 30 00:59:12.118434 systemd[1]: Reached target network.target - Network.
Apr 30 00:59:12.119551 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:59:12.124168 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 30 00:59:12.135186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:59:12.138997 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:59:12.148160 ignition[769]: Ignition 2.19.0
Apr 30 00:59:12.148171 ignition[769]: Stage: kargs
Apr 30 00:59:12.148335 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:59:12.148345 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:59:12.149094 ignition[769]: kargs: kargs passed
Apr 30 00:59:12.153365 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:59:12.149141 ignition[769]: Ignition finished successfully
Apr 30 00:59:12.159105 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:59:12.171491 ignition[778]: Ignition 2.19.0
Apr 30 00:59:12.171502 ignition[778]: Stage: disks
Apr 30 00:59:12.171667 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:59:12.171676 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:59:12.174224 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:59:12.172421 ignition[778]: disks: disks passed
Apr 30 00:59:12.172465 ignition[778]: Ignition finished successfully
Apr 30 00:59:12.177452 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:59:12.179040 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:59:12.181109 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:59:12.183396 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:59:12.185908 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:59:12.194058 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:59:12.204713 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:59:12.208655 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:59:12.216053 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:59:12.265715 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:59:12.267259 kernel: EXT4-fs (vda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:59:12.266991 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:59:12.285041 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:59:12.286910 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:59:12.288528 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:59:12.288570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:59:12.295181 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (796)
Apr 30 00:59:12.295205 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:59:12.288593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:59:12.299756 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:59:12.299774 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:59:12.293200 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:59:12.300203 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:59:12.304938 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:59:12.306110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:59:12.342928 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:59:12.346120 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:59:12.349355 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:59:12.353531 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:59:12.428182 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:59:12.445009 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:59:12.448050 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:59:12.451958 kernel: BTRFS info (device vda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:59:12.467118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:59:12.472527 ignition[909]: INFO : Ignition 2.19.0
Apr 30 00:59:12.472527 ignition[909]: INFO : Stage: mount
Apr 30 00:59:12.474138 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:59:12.474138 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:59:12.474138 ignition[909]: INFO : mount: mount passed
Apr 30 00:59:12.474138 ignition[909]: INFO : Ignition finished successfully
Apr 30 00:59:12.474951 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:59:12.495083 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:59:12.932422 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:59:12.941125 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:59:12.946954 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Apr 30 00:59:12.949222 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:59:12.949257 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:59:12.949268 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:59:12.952942 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:59:12.953837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:59:12.971120 ignition[940]: INFO : Ignition 2.19.0
Apr 30 00:59:12.971120 ignition[940]: INFO : Stage: files
Apr 30 00:59:12.973116 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:59:12.973116 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:59:12.973116 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:59:12.973116 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:59:12.973116 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:59:12.980489 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 00:59:12.977436 unknown[940]: wrote ssh authorized keys file for user: core
Apr 30 00:59:13.310956 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Apr 30 00:59:13.473392 systemd-networkd[764]: eth0: Gained IPv6LL
Apr 30 00:59:13.708677 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:59:13.708677 ignition[940]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Apr 30 00:59:13.712539 ignition[940]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:59:13.712539 ignition[940]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:59:13.712539 ignition[940]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Apr 30 00:59:13.712539 ignition[940]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:59:13.733963 ignition[940]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:59:13.737740 ignition[940]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:59:13.739511 ignition[940]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:59:13.739511 ignition[940]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:59:13.739511 ignition[940]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:59:13.739511 ignition[940]: INFO : files: files passed
Apr 30 00:59:13.739511 ignition[940]: INFO : Ignition finished successfully
Apr 30 00:59:13.739794 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:59:13.749073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:59:13.751645 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:59:13.754709 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:59:13.754811 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:59:13.758577 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 00:59:13.761169 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:59:13.761169 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:59:13.764234 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:59:13.763991 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:59:13.765537 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:59:13.777110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:59:13.797323 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:59:13.797460 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:59:13.799761 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:59:13.801777 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:59:13.803764 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:59:13.804686 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:59:13.819467 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:59:13.835165 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:59:13.843262 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:59:13.844715 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:59:13.846802 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:59:13.848660 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:59:13.848791 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:59:13.851751 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:59:13.853656 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:59:13.856562 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:59:13.858507 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:59:13.861252 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:59:13.863335 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:59:13.865654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:59:13.867756 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:59:13.870225 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:59:13.872243 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:59:13.873809 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:59:13.873968 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:59:13.876516 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:59:13.878607 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:59:13.880896 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:59:13.881998 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:59:13.883275 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:59:13.883397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:59:13.886658 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:59:13.886834 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:59:13.889254 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:59:13.890971 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:59:13.896953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:59:13.898412 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:59:13.900764 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:59:13.902467 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:59:13.902595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:59:13.904266 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:59:13.904388 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:59:13.906087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:59:13.906243 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:59:13.908179 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:59:13.908321 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:59:13.926191 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:59:13.928987 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:59:13.929912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:59:13.930163 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:59:13.932272 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:59:13.932424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:59:13.940445 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:59:13.943073 ignition[997]: INFO : Ignition 2.19.0 Apr 30 00:59:13.943073 ignition[997]: INFO : Stage: umount Apr 30 00:59:13.943073 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:59:13.943073 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:59:13.941958 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:59:13.952189 ignition[997]: INFO : umount: umount passed Apr 30 00:59:13.952189 ignition[997]: INFO : Ignition finished successfully Apr 30 00:59:13.945243 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:59:13.948758 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:59:13.948972 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:59:13.951165 systemd[1]: Stopped target network.target - Network. Apr 30 00:59:13.953121 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:59:13.953186 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:59:13.954940 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:59:13.954989 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:59:13.957030 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:59:13.957082 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:59:13.959030 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:59:13.959084 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:59:13.961075 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:59:13.962874 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:59:13.966981 systemd-networkd[764]: eth0: DHCPv6 lease lost Apr 30 00:59:13.969129 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:59:13.969271 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:59:13.971773 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:59:13.971808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:59:13.985062 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:59:13.985963 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:59:13.986032 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:59:13.988537 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:59:13.992716 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:59:13.992828 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:59:13.996255 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:59:13.996348 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:59:13.999762 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:59:13.999810 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:59:14.001638 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:59:14.001683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 00:59:14.003466 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:59:14.003512 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:59:14.005408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:59:14.005451 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:59:14.007744 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:59:14.007855 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:59:14.009751 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:59:14.009831 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:59:14.012026 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:59:14.012098 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:59:14.013338 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:59:14.013371 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:59:14.015168 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:59:14.015214 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:59:14.018190 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:59:14.018239 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:59:14.021367 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:59:14.021419 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:59:14.037119 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:59:14.038258 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:59:14.038329 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:59:14.040647 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:59:14.040695 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:59:14.042863 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:59:14.042909 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:59:14.045310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:59:14.045363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:59:14.047810 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:59:14.047895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:59:14.050388 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:59:14.052808 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:59:14.064020 systemd[1]: Switching root. Apr 30 00:59:14.092342 systemd-journald[239]: Journal stopped Apr 30 00:59:14.837047 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Apr 30 00:59:14.837109 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:59:14.837123 kernel: SELinux: policy capability open_perms=1 Apr 30 00:59:14.837134 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:59:14.837144 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:59:14.837157 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:59:14.837167 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:59:14.837176 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:59:14.837192 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:59:14.837206 kernel: audit: type=1403 audit(1745974754.246:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:59:14.837217 systemd[1]: Successfully loaded SELinux policy in 32.028ms. Apr 30 00:59:14.837231 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.525ms. Apr 30 00:59:14.837243 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:59:14.837254 systemd[1]: Detected virtualization kvm. Apr 30 00:59:14.837265 systemd[1]: Detected architecture arm64. Apr 30 00:59:14.837275 systemd[1]: Detected first boot. Apr 30 00:59:14.837285 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:59:14.837299 zram_generator::config[1042]: No configuration found. Apr 30 00:59:14.837310 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:59:14.837321 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:59:14.837332 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 00:59:14.837344 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:59:14.837357 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:59:14.837368 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:59:14.837379 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:59:14.837390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:59:14.837400 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:59:14.837411 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:59:14.837422 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:59:14.837432 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:59:14.837445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:59:14.837456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:59:14.837467 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:59:14.837478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:59:14.837488 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Apr 30 00:59:14.837500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:59:14.837510 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:59:14.837521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:59:14.837531 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 00:59:14.837543 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:59:14.837554 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:59:14.837565 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:59:14.837577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:59:14.837588 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:59:14.837599 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:59:14.837609 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:59:14.837620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:59:14.837633 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:59:14.837643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:59:14.837654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:59:14.837664 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:59:14.837674 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:59:14.837685 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:59:14.837695 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:59:14.837705 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:59:14.837716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:59:14.837728 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:59:14.837738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:59:14.837750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:59:14.837760 systemd[1]: Reached target machines.target - Containers. Apr 30 00:59:14.837773 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:59:14.837784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:59:14.837795 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:59:14.837806 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:59:14.837818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:59:14.837829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:59:14.837840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:59:14.837850 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:59:14.837860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 30 00:59:14.837870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:59:14.837881 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:59:14.837891 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:59:14.837901 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:59:14.837912 kernel: fuse: init (API version 7.39) Apr 30 00:59:14.837954 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 00:59:14.837967 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:59:14.837977 kernel: ACPI: bus type drm_connector registered Apr 30 00:59:14.837987 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:59:14.837997 kernel: loop: module loaded Apr 30 00:59:14.838007 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:59:14.838017 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:59:14.838027 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:59:14.838040 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:59:14.838050 systemd[1]: Stopped verity-setup.service. Apr 30 00:59:14.838060 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:59:14.838077 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:59:14.838106 systemd-journald[1109]: Collecting audit messages is disabled. Apr 30 00:59:14.838126 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:59:14.838137 systemd-journald[1109]: Journal started Apr 30 00:59:14.838160 systemd-journald[1109]: Runtime Journal (/run/log/journal/ee8dabcac86d4c0498885f59f7e7bc78) is 5.9M, max 47.3M, 41.4M free. Apr 30 00:59:14.621025 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:59:14.640896 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 00:59:14.641281 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:59:14.840131 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:59:14.842156 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:59:14.842789 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:59:14.844083 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:59:14.845353 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:59:14.846830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:59:14.848408 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:59:14.848555 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:59:14.850060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:59:14.850215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:59:14.851684 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:59:14.851832 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:59:14.853213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:59:14.853343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 30 00:59:14.855039 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:59:14.855180 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:59:14.856559 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:59:14.856707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:59:14.858165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:59:14.859734 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:59:14.861330 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:59:14.873679 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:59:14.884042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:59:14.886443 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:59:14.887624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:59:14.887667 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:59:14.890078 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:59:14.892508 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:59:14.894838 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:59:14.896125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:59:14.897839 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:59:14.903147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:59:14.904566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:59:14.905731 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:59:14.907205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:59:14.911182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:59:14.912300 systemd-journald[1109]: Time spent on flushing to /var/log/journal/ee8dabcac86d4c0498885f59f7e7bc78 is 25.167ms for 838 entries. Apr 30 00:59:14.912300 systemd-journald[1109]: System Journal (/var/log/journal/ee8dabcac86d4c0498885f59f7e7bc78) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:59:14.950503 systemd-journald[1109]: Received client request to flush runtime journal. Apr 30 00:59:14.950547 kernel: loop0: detected capacity change from 0 to 114432 Apr 30 00:59:14.916139 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:59:14.922185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:59:14.925408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:59:14.926907 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:59:14.928367 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 30 00:59:14.930061 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:59:14.932087 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:59:14.937679 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:59:14.952256 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:59:14.955998 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:59:14.957232 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:59:14.959192 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:59:14.962441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:59:14.971836 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Apr 30 00:59:14.971851 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Apr 30 00:59:14.974359 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:59:14.979087 kernel: loop1: detected capacity change from 0 to 194096 Apr 30 00:59:14.979477 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:59:14.983541 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:59:14.985227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:59:14.994170 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:59:15.015074 kernel: loop2: detected capacity change from 0 to 114328 Apr 30 00:59:15.021980 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:59:15.036121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:59:15.044989 kernel: loop3: detected capacity change from 0 to 114432 Apr 30 00:59:15.050172 kernel: loop4: detected capacity change from 0 to 194096 Apr 30 00:59:15.050031 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Apr 30 00:59:15.050043 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Apr 30 00:59:15.056250 kernel: loop5: detected capacity change from 0 to 114328 Apr 30 00:59:15.056710 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:59:15.061573 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 00:59:15.065392 (sd-merge)[1179]: Merged extensions into '/usr'. Apr 30 00:59:15.068935 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:59:15.068950 systemd[1]: Reloading... Apr 30 00:59:15.138992 zram_generator::config[1209]: No configuration found. Apr 30 00:59:15.200573 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:59:15.230566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:59:15.270534 systemd[1]: Reloading finished in 201 ms. Apr 30 00:59:15.300986 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Apr 30 00:59:15.302404 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:59:15.321142 systemd[1]: Starting ensure-sysext.service... Apr 30 00:59:15.323330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:59:15.334316 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:59:15.334333 systemd[1]: Reloading... Apr 30 00:59:15.351475 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:59:15.351732 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:59:15.352406 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:59:15.352618 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Apr 30 00:59:15.352675 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Apr 30 00:59:15.355051 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:59:15.355069 systemd-tmpfiles[1241]: Skipping /boot Apr 30 00:59:15.362222 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:59:15.362236 systemd-tmpfiles[1241]: Skipping /boot Apr 30 00:59:15.389716 zram_generator::config[1268]: No configuration found. Apr 30 00:59:15.476285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:59:15.514467 systemd[1]: Reloading finished in 179 ms. Apr 30 00:59:15.529017 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:59:15.538454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:59:15.546780 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:59:15.549597 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:59:15.552188 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:59:15.556226 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:59:15.564438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:59:15.579376 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:59:15.582638 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:59:15.590835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:59:15.601251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:59:15.605391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:59:15.609290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:59:15.610468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:59:15.611832 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:59:15.612552 systemd-udevd[1315]: Using default interface naming scheme 'v255'. 
Apr 30 00:59:15.621233 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:59:15.626170 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:59:15.628284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:59:15.628442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:59:15.630166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:59:15.630478 augenrules[1331]: No rules Apr 30 00:59:15.630312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:59:15.632346 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:59:15.634759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:59:15.638898 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:59:15.639239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:59:15.642345 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:59:15.654439 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:59:15.667974 systemd[1]: Finished ensure-sysext.service. Apr 30 00:59:15.669050 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:59:15.673497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:59:15.689174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:59:15.694079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:59:15.699682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:59:15.705379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:59:15.708350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:59:15.717195 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:59:15.725374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1342) Apr 30 00:59:15.721127 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:59:15.723733 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:59:15.724328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:59:15.725665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:59:15.731725 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:59:15.732264 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:59:15.734291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:59:15.734465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:59:15.737281 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:59:15.737621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:59:15.750771 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Apr 30 00:59:15.761411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:59:15.761485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:59:15.774324 systemd-resolved[1309]: Positive Trust Anchors: Apr 30 00:59:15.774342 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:59:15.774375 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:59:15.787179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 00:59:15.791982 systemd-resolved[1309]: Defaulting to hostname 'linux'. Apr 30 00:59:15.796173 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:59:15.799373 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:59:15.800719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:59:15.815386 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:59:15.816940 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:59:15.818293 systemd-networkd[1373]: lo: Link UP Apr 30 00:59:15.818300 systemd-networkd[1373]: lo: Gained carrier Apr 30 00:59:15.819051 systemd-networkd[1373]: Enumeration completed Apr 30 00:59:15.819407 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:59:15.820262 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:59:15.820272 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:59:15.820963 systemd[1]: Reached target network.target - Network. Apr 30 00:59:15.822186 systemd-networkd[1373]: eth0: Link UP Apr 30 00:59:15.822194 systemd-networkd[1373]: eth0: Gained carrier Apr 30 00:59:15.822210 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:59:15.828178 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:59:15.829867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:59:15.837606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:59:15.847039 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:59:15.848294 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Apr 30 00:59:15.849350 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Apr 30 00:59:15.849412 systemd-timesyncd[1374]: Initial clock synchronization to Wed 2025-04-30 00:59:16.171651 UTC. Apr 30 00:59:15.861136 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:59:15.868187 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:59:15.887833 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:59:15.894310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:59:15.922597 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:59:15.925287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:59:15.926597 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:59:15.927953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:59:15.929292 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:59:15.930933 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:59:15.932191 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:59:15.933521 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:59:15.934829 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:59:15.934876 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:59:15.935840 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:59:15.937986 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:59:15.940568 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:59:15.947958 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:59:15.950441 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:59:15.952224 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:59:15.953484 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:59:15.954509 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:59:15.955518 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:59:15.955549 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:59:15.956649 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:59:15.958337 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:59:15.958903 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:59:15.964108 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:59:15.969175 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:59:15.973124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:59:15.974345 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Apr 30 00:59:15.979719 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:59:15.983246 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:59:15.984299 jq[1406]: false Apr 30 00:59:15.988386 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:59:15.993999 dbus-daemon[1405]: [system] SELinux support is enabled Apr 30 00:59:16.000170 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:59:16.000672 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:59:16.003225 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:59:16.007110 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:59:16.008904 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:59:16.010100 extend-filesystems[1407]: Found loop3 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found loop4 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found loop5 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda1 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda2 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda3 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found usr Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda4 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda6 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda7 Apr 30 00:59:16.010100 extend-filesystems[1407]: Found vda9 Apr 30 00:59:16.010100 extend-filesystems[1407]: Checking size of /dev/vda9 Apr 30 00:59:16.014464 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:59:16.034764 jq[1420]: true Apr 30 00:59:16.018464 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:59:16.018640 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:59:16.018929 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:59:16.019086 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:59:16.023428 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:59:16.023586 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:59:16.050930 jq[1426]: true Apr 30 00:59:16.043664 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:59:16.043755 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:59:16.045888 (ntainerd)[1427]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:59:16.046238 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:59:16.046274 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 30 00:59:16.062076 extend-filesystems[1407]: Resized partition /dev/vda9 Apr 30 00:59:16.070468 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:59:16.077444 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1349) Apr 30 00:59:16.077472 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 00:59:16.070393 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:59:16.077593 update_engine[1417]: I20250430 00:59:16.062386 1417 main.cc:92] Flatcar Update Engine starting Apr 30 00:59:16.077593 update_engine[1417]: I20250430 00:59:16.069283 1417 update_check_scheduler.cc:74] Next update check in 4m34s Apr 30 00:59:16.073685 systemd-logind[1413]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:59:16.083165 systemd-logind[1413]: New seat seat0. Apr 30 00:59:16.089332 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:59:16.108381 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 00:59:16.110285 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:59:16.131324 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 00:59:16.131324 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 00:59:16.131324 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 00:59:16.126443 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:59:16.138063 extend-filesystems[1407]: Resized filesystem in /dev/vda9 Apr 30 00:59:16.126633 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:59:16.149256 bash[1455]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:59:16.151667 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:59:16.154349 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 00:59:16.164202 locksmithd[1442]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:59:16.271368 containerd[1427]: time="2025-04-30T00:59:16.271220559Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:59:16.296698 containerd[1427]: time="2025-04-30T00:59:16.296641182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298135139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298359179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298387059Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298607312Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298651338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298721745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:59:16.298947 containerd[1427]: time="2025-04-30T00:59:16.298748543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299156 containerd[1427]: time="2025-04-30T00:59:16.298947033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299156 containerd[1427]: time="2025-04-30T00:59:16.299000421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299156 containerd[1427]: time="2025-04-30T00:59:16.299021394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299156 containerd[1427]: time="2025-04-30T00:59:16.299035917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299156 containerd[1427]: time="2025-04-30T00:59:16.299124509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299376 containerd[1427]: time="2025-04-30T00:59:16.299333193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299496 containerd[1427]: time="2025-04-30T00:59:16.299476214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:59:16.299524 containerd[1427]: time="2025-04-30T00:59:16.299499642Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:59:16.299635 containerd[1427]: time="2025-04-30T00:59:16.299608457Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:59:16.299697 containerd[1427]: time="2025-04-30T00:59:16.299678324Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:59:16.303261 containerd[1427]: time="2025-04-30T00:59:16.303221511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:59:16.303318 containerd[1427]: time="2025-04-30T00:59:16.303280725Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:59:16.303318 containerd[1427]: time="2025-04-30T00:59:16.303298868Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:59:16.303318 containerd[1427]: time="2025-04-30T00:59:16.303314556Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 30 00:59:16.303371 containerd[1427]: time="2025-04-30T00:59:16.303329162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:59:16.303509 containerd[1427]: time="2025-04-30T00:59:16.303474846Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:59:16.303730 containerd[1427]: time="2025-04-30T00:59:16.303710328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:59:16.303842 containerd[1427]: time="2025-04-30T00:59:16.303824928Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:59:16.303867 containerd[1427]: time="2025-04-30T00:59:16.303845110Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:59:16.303867 containerd[1427]: time="2025-04-30T00:59:16.303860506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:59:16.303911 containerd[1427]: time="2025-04-30T00:59:16.303873947Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.303911 containerd[1427]: time="2025-04-30T00:59:16.303894961Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.303911 containerd[1427]: time="2025-04-30T00:59:16.303908360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.303971 containerd[1427]: time="2025-04-30T00:59:16.303922508Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.303971 containerd[1427]: time="2025-04-30T00:59:16.303937406Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.304006 containerd[1427]: time="2025-04-30T00:59:16.303972401Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.304006 containerd[1427]: time="2025-04-30T00:59:16.303985842Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.304006 containerd[1427]: time="2025-04-30T00:59:16.304000032Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:59:16.304062 containerd[1427]: time="2025-04-30T00:59:16.304025582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304062 containerd[1427]: time="2025-04-30T00:59:16.304045139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304062 containerd[1427]: time="2025-04-30T00:59:16.304058122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304118 containerd[1427]: time="2025-04-30T00:59:16.304071230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304118 containerd[1427]: time="2025-04-30T00:59:16.304084130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Apr 30 00:59:16.304118 containerd[1427]: time="2025-04-30T00:59:16.304097404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304118 containerd[1427]: time="2025-04-30T00:59:16.304109596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304186 containerd[1427]: time="2025-04-30T00:59:16.304122829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304186 containerd[1427]: time="2025-04-30T00:59:16.304136187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304186 containerd[1427]: time="2025-04-30T00:59:16.304151084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304186 containerd[1427]: time="2025-04-30T00:59:16.304162610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304186 containerd[1427]: time="2025-04-30T00:59:16.304174678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304271 containerd[1427]: time="2025-04-30T00:59:16.304187786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304271 containerd[1427]: time="2025-04-30T00:59:16.304211255Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:59:16.304271 containerd[1427]: time="2025-04-30T00:59:16.304231978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304271 containerd[1427]: time="2025-04-30T00:59:16.304245044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.304271 containerd[1427]: time="2025-04-30T00:59:16.304256237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:59:16.305039 containerd[1427]: time="2025-04-30T00:59:16.304995809Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:59:16.305080 containerd[1427]: time="2025-04-30T00:59:16.305039960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:59:16.305080 containerd[1427]: time="2025-04-30T00:59:16.305052235Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:59:16.305080 containerd[1427]: time="2025-04-30T00:59:16.305066883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:59:16.305080 containerd[1427]: time="2025-04-30T00:59:16.305076537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.305164 containerd[1427]: time="2025-04-30T00:59:16.305089353Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:59:16.305164 containerd[1427]: time="2025-04-30T00:59:16.305110118Z" level=info msg="NRI interface is disabled by configuration." 
Apr 30 00:59:16.305164 containerd[1427]: time="2025-04-30T00:59:16.305121519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:59:16.305590 containerd[1427]: time="2025-04-30T00:59:16.305505349Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:59:16.305590 containerd[1427]: time="2025-04-30T00:59:16.305581999Z" level=info msg="Connect containerd service" Apr 30 00:59:16.305733 containerd[1427]: time="2025-04-30T00:59:16.305612750Z" level=info msg="using legacy CRI server" Apr 30 00:59:16.305733 containerd[1427]: time="2025-04-30T00:59:16.305620157Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:59:16.305733 containerd[1427]: time="2025-04-30T00:59:16.305725519Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:59:16.306646 containerd[1427]: time="2025-04-30T00:59:16.306590635Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:59:16.306994 containerd[1427]: time="2025-04-30T00:59:16.306906096Z" level=info msg="Start subscribing containerd event" Apr 30 00:59:16.306994 containerd[1427]: time="2025-04-30T00:59:16.306975380Z" level=info msg="Start recovering state" Apr 30 00:59:16.307162 containerd[1427]: time="2025-04-30T00:59:16.307148903Z" level=info msg="Start event monitor" Apr 30 00:59:16.307223 containerd[1427]: time="2025-04-30T00:59:16.307211029Z" level=info msg="Start snapshots syncer" Apr 30 00:59:16.307442 containerd[1427]: time="2025-04-30T00:59:16.307265083Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:59:16.307442 containerd[1427]: time="2025-04-30T00:59:16.307279065Z" level=info msg="Start streaming server" Apr 30 00:59:16.307442 containerd[1427]: time="2025-04-30T00:59:16.307245526Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:59:16.307442 containerd[1427]: time="2025-04-30T00:59:16.307397868Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:59:16.308814 containerd[1427]: time="2025-04-30T00:59:16.307467859Z" level=info msg="containerd successfully booted in 0.037128s" Apr 30 00:59:16.307553 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:59:16.514604 sshd_keygen[1418]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:59:16.537310 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:59:16.547562 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:59:16.553666 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:59:16.555020 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:59:16.559483 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:59:16.576582 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:59:16.586582 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:59:16.589475 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:59:16.591268 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:59:16.930347 systemd-networkd[1373]: eth0: Gained IPv6LL Apr 30 00:59:16.932769 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:59:16.934669 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:59:16.945252 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 00:59:16.947734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:59:16.949947 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:59:16.966218 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 00:59:16.967061 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 00:59:16.968826 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:59:16.972789 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:59:17.485949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:59:17.487549 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 30 00:59:17.490000 (kubelet)[1512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:59:17.493138 systemd[1]: Startup finished in 651ms (kernel) + 4.535s (initrd) + 3.280s (userspace) = 8.467s. Apr 30 00:59:18.083809 kubelet[1512]: E0430 00:59:18.083738 1512 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:59:18.086816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:59:18.087008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:59:22.639834 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:59:22.641161 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028). Apr 30 00:59:22.693633 sshd[1527]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:22.695436 sshd[1527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:22.713761 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:59:22.725217 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:59:22.726834 systemd-logind[1413]: New session 1 of user core. Apr 30 00:59:22.736682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:59:22.739086 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:59:22.746364 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:59:22.822814 systemd[1531]: Queued start job for default target default.target. Apr 30 00:59:22.833974 systemd[1531]: Created slice app.slice - User Application Slice. Apr 30 00:59:22.834002 systemd[1531]: Reached target paths.target - Paths. Apr 30 00:59:22.834014 systemd[1531]: Reached target timers.target - Timers. Apr 30 00:59:22.835353 systemd[1531]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:59:22.847429 systemd[1531]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:59:22.847547 systemd[1531]: Reached target sockets.target - Sockets. Apr 30 00:59:22.847565 systemd[1531]: Reached target basic.target - Basic System. Apr 30 00:59:22.847602 systemd[1531]: Reached target default.target - Main User Target. Apr 30 00:59:22.847638 systemd[1531]: Startup finished in 93ms. Apr 30 00:59:22.847879 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:59:22.850153 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:59:22.910127 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:47032.service - OpenSSH per-connection server daemon (10.0.0.1:47032). Apr 30 00:59:22.951280 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 47032 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:22.952682 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:22.957506 systemd-logind[1413]: New session 2 of user core. Apr 30 00:59:22.969115 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 00:59:23.023312 sshd[1542]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:23.043445 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:47032.service: Deactivated successfully. Apr 30 00:59:23.045584 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:59:23.049229 systemd-logind[1413]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:59:23.051251 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:47036.service - OpenSSH per-connection server daemon (10.0.0.1:47036). Apr 30 00:59:23.052086 systemd-logind[1413]: Removed session 2. Apr 30 00:59:23.092223 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 47036 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:23.094170 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:23.099381 systemd-logind[1413]: New session 3 of user core. Apr 30 00:59:23.113172 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:59:23.165601 sshd[1549]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:23.178699 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:47036.service: Deactivated successfully. Apr 30 00:59:23.180713 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:59:23.182296 systemd-logind[1413]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:59:23.196258 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:47050.service - OpenSSH per-connection server daemon (10.0.0.1:47050). Apr 30 00:59:23.197514 systemd-logind[1413]: Removed session 3. Apr 30 00:59:23.230023 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 47050 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:23.231921 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:23.235724 systemd-logind[1413]: New session 4 of user core. Apr 30 00:59:23.242090 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:59:23.297830 sshd[1556]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:23.310496 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:47050.service: Deactivated successfully. Apr 30 00:59:23.314065 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:59:23.316167 systemd-logind[1413]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:59:23.318154 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:47064.service - OpenSSH per-connection server daemon (10.0.0.1:47064). Apr 30 00:59:23.318898 systemd-logind[1413]: Removed session 4. Apr 30 00:59:23.358616 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 47064 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:23.360043 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:23.364065 systemd-logind[1413]: New session 5 of user core. Apr 30 00:59:23.375142 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:59:23.436203 sudo[1566]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:59:23.436518 sudo[1566]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:59:23.448964 sudo[1566]: pam_unix(sudo:session): session closed for user root Apr 30 00:59:23.450940 sshd[1563]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:23.460714 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:47064.service: Deactivated successfully. 
Apr 30 00:59:23.464033 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:59:23.465420 systemd-logind[1413]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:59:23.477252 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:47070.service - OpenSSH per-connection server daemon (10.0.0.1:47070). Apr 30 00:59:23.478205 systemd-logind[1413]: Removed session 5. Apr 30 00:59:23.510630 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 47070 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:23.512042 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:23.515742 systemd-logind[1413]: New session 6 of user core. Apr 30 00:59:23.527141 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:59:23.579316 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:59:23.579600 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:59:23.582648 sudo[1575]: pam_unix(sudo:session): session closed for user root Apr 30 00:59:23.587431 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:59:23.587833 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:59:23.606310 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:59:23.607755 auditctl[1578]: No rules Apr 30 00:59:23.608639 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:59:23.608849 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:59:23.610715 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:59:23.636329 augenrules[1596]: No rules Apr 30 00:59:23.637698 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:59:23.639093 sudo[1574]: pam_unix(sudo:session): session closed for user root Apr 30 00:59:23.640774 sshd[1571]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:23.654479 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:47070.service: Deactivated successfully. Apr 30 00:59:23.656061 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:59:23.656692 systemd-logind[1413]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:59:23.667367 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:47076.service - OpenSSH per-connection server daemon (10.0.0.1:47076). Apr 30 00:59:23.669563 systemd-logind[1413]: Removed session 6. Apr 30 00:59:23.699625 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 47076 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:59:23.701173 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:59:23.704779 systemd-logind[1413]: New session 7 of user core. Apr 30 00:59:23.716097 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:59:23.767890 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:59:23.768188 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:59:23.790379 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 00:59:23.806879 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 00:59:23.807126 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Apr 30 00:59:24.403212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:59:24.417272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:59:24.435900 systemd[1]: Reloading requested from client PID 1658 ('systemctl') (unit session-7.scope)... Apr 30 00:59:24.435926 systemd[1]: Reloading... Apr 30 00:59:24.528976 zram_generator::config[1699]: No configuration found. Apr 30 00:59:24.711302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:59:24.770129 systemd[1]: Reloading finished in 333 ms. Apr 30 00:59:24.814751 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:59:24.814822 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:59:24.815075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:59:24.818038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:59:24.922873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:59:24.928248 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:59:24.971638 kubelet[1742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:59:24.971638 kubelet[1742]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:59:24.971638 kubelet[1742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:59:24.971989 kubelet[1742]: I0430 00:59:24.971711 1742 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:59:26.134216 kubelet[1742]: I0430 00:59:26.134160 1742 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:59:26.134216 kubelet[1742]: I0430 00:59:26.134197 1742 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:59:26.134658 kubelet[1742]: I0430 00:59:26.134410 1742 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:59:26.171358 kubelet[1742]: I0430 00:59:26.171300 1742 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:59:26.182342 kubelet[1742]: I0430 00:59:26.182304 1742 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:59:26.183440 kubelet[1742]: I0430 00:59:26.183382 1742 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:59:26.183625 kubelet[1742]: I0430 00:59:26.183435 1742 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:59:26.183710 kubelet[1742]: I0430 00:59:26.183693 1742 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:59:26.183710 kubelet[1742]: I0430 00:59:26.183702 1742 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:59:26.184009 kubelet[1742]: I0430 00:59:26.183987 1742 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:59:26.184836 kubelet[1742]: I0430 00:59:26.184806 1742 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:59:26.184836 kubelet[1742]: I0430 00:59:26.184828 1742 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:59:26.185365 kubelet[1742]: I0430 00:59:26.185024 1742 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:59:26.185365 kubelet[1742]: E0430 00:59:26.185078 1742 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:26.185365 kubelet[1742]: I0430 00:59:26.185150 1742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:59:26.185365 kubelet[1742]: E0430 00:59:26.185325 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:26.186489 kubelet[1742]: I0430 00:59:26.186470 1742 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:59:26.186968 kubelet[1742]: I0430 00:59:26.186950 1742 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:59:26.187135 kubelet[1742]: W0430 00:59:26.187123 1742 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 00:59:26.188076 kubelet[1742]: I0430 00:59:26.188054 1742 server.go:1264] "Started kubelet" Apr 30 00:59:26.188723 kubelet[1742]: I0430 00:59:26.188666 1742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:59:26.188921 kubelet[1742]: I0430 00:59:26.188883 1742 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:59:26.188976 kubelet[1742]: I0430 00:59:26.188927 1742 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:59:26.189819 kubelet[1742]: I0430 00:59:26.189793 1742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:59:26.190047 kubelet[1742]: I0430 00:59:26.189961 1742 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:59:26.196371 kubelet[1742]: I0430 00:59:26.193670 1742 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:59:26.196371 kubelet[1742]: I0430 00:59:26.193820 1742 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:59:26.196371 kubelet[1742]: I0430 00:59:26.195192 1742 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:59:26.197362 kubelet[1742]: E0430 00:59:26.197331 1742 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:59:26.198044 kubelet[1742]: E0430 00:59:26.197827 1742 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9d6edaa2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.18802653 +0000 UTC m=+1.256299492,LastTimestamp:2025-04-30 00:59:26.18802653 +0000 UTC m=+1.256299492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.198227 kubelet[1742]: W0430 00:59:26.198197 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Apr 30 00:59:26.198271 kubelet[1742]: E0430 00:59:26.198234 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Apr 30 00:59:26.198316 kubelet[1742]: W0430 00:59:26.198283 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 30 00:59:26.198316 kubelet[1742]: E0430 00:59:26.198294 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 30 
00:59:26.198534 kubelet[1742]: I0430 00:59:26.198507 1742 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:59:26.198628 kubelet[1742]: I0430 00:59:26.198607 1742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:59:26.200519 kubelet[1742]: I0430 00:59:26.200452 1742 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:59:26.200519 kubelet[1742]: W0430 00:59:26.200509 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Apr 30 00:59:26.200619 kubelet[1742]: E0430 00:59:26.200533 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Apr 30 00:59:26.200619 kubelet[1742]: E0430 00:59:26.200574 1742 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.153\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 30 00:59:26.200959 kubelet[1742]: E0430 00:59:26.200835 1742 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9dfca2bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.197318333 +0000 UTC m=+1.265591296,LastTimestamp:2025-04-30 00:59:26.197318333 +0000 UTC m=+1.265591296,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.207547 kubelet[1742]: I0430 00:59:26.207463 1742 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:59:26.207547 kubelet[1742]: I0430 00:59:26.207482 1742 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:59:26.207547 kubelet[1742]: I0430 00:59:26.207502 1742 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:59:26.208612 kubelet[1742]: E0430 00:59:26.208525 1742 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9e8bfdfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.153 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.20671334 +0000 UTC m=+1.274986302,LastTimestamp:2025-04-30 00:59:26.20671334 +0000 
UTC m=+1.274986302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.216392 kubelet[1742]: E0430 00:59:26.216095 1742 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9e8c2917 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.153 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.206724375 +0000 UTC m=+1.274997297,LastTimestamp:2025-04-30 00:59:26.206724375 +0000 UTC m=+1.274997297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.224317 kubelet[1742]: E0430 00:59:26.224179 1742 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9e8c4233 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.153 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.206730803 +0000 UTC m=+1.275003765,LastTimestamp:2025-04-30 00:59:26.206730803 +0000 UTC m=+1.275003765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.294687 kubelet[1742]: I0430 00:59:26.294396 1742 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.153" Apr 30 00:59:26.303242 kubelet[1742]: E0430 00:59:26.303210 1742 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.153" Apr 30 00:59:26.303971 kubelet[1742]: E0430 00:59:26.303693 1742 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.153.183af2cc9e8bfdfc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9e8bfdfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.153 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.20671334 +0000 UTC m=+1.274986302,LastTimestamp:2025-04-30 00:59:26.29434548 +0000 UTC m=+1.362618442,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.311772 kubelet[1742]: E0430 00:59:26.311549 1742 event.go:359] "Server rejected event (will not retry!)" err="events 
\"10.0.0.153.183af2cc9e8c2917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9e8c2917 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.153 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.206724375 +0000 UTC m=+1.274997297,LastTimestamp:2025-04-30 00:59:26.294360679 +0000 UTC m=+1.362633641,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.319220 kubelet[1742]: E0430 00:59:26.319067 1742 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.153.183af2cc9e8c4233\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.153.183af2cc9e8c4233 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.153,UID:10.0.0.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.153 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.153,},FirstTimestamp:2025-04-30 00:59:26.206730803 +0000 UTC m=+1.275003765,LastTimestamp:2025-04-30 00:59:26.294364398 +0000 UTC m=+1.362637360,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.153,}" Apr 30 00:59:26.359861 kubelet[1742]: I0430 00:59:26.359814 1742 policy_none.go:49] "None policy: Start" Apr 30 00:59:26.361124 kubelet[1742]: I0430 00:59:26.361094 1742 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:59:26.361124 kubelet[1742]: I0430 00:59:26.361124 1742 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:59:26.370798 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:59:26.386804 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:59:26.388603 kubelet[1742]: I0430 00:59:26.388527 1742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:59:26.390202 kubelet[1742]: I0430 00:59:26.390181 1742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:59:26.390484 kubelet[1742]: I0430 00:59:26.390410 1742 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:59:26.390484 kubelet[1742]: I0430 00:59:26.390431 1742 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:59:26.390729 kubelet[1742]: E0430 00:59:26.390473 1742 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:59:26.390966 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 00:59:26.401010 kubelet[1742]: I0430 00:59:26.400891 1742 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:59:26.401516 kubelet[1742]: I0430 00:59:26.401205 1742 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:59:26.401516 kubelet[1742]: I0430 00:59:26.401354 1742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:59:26.402776 kubelet[1742]: E0430 00:59:26.402744 1742 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.153\" not found" Apr 30 00:59:26.404617 kubelet[1742]: E0430 00:59:26.404596 1742 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.153\" not found" node="10.0.0.153" Apr 30 00:59:26.504730 kubelet[1742]: I0430 00:59:26.504684 1742 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.153" Apr 30 00:59:26.518810 kubelet[1742]: I0430 00:59:26.518762 1742 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.153" Apr 30 00:59:26.530538 kubelet[1742]: E0430 00:59:26.530485 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:26.633441 kubelet[1742]: E0430 00:59:26.633327 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:26.733689 kubelet[1742]: E0430 00:59:26.733549 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:26.834530 kubelet[1742]: E0430 00:59:26.834466 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:26.934667 kubelet[1742]: E0430 00:59:26.934614 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:26.954249 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 30 00:59:26.956130 sshd[1604]: pam_unix(sshd:session): session closed for user core Apr 30 00:59:26.959604 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:47076.service: Deactivated successfully. Apr 30 00:59:26.961236 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:59:26.961970 systemd-logind[1413]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:59:26.962899 systemd-logind[1413]: Removed session 7. 
Apr 30 00:59:27.035834 kubelet[1742]: E0430 00:59:27.035711 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.136335 kubelet[1742]: I0430 00:59:27.136129 1742 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 30 00:59:27.136335 kubelet[1742]: E0430 00:59:27.136266 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.136335 kubelet[1742]: W0430 00:59:27.136334 1742 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Apr 30 00:59:27.186137 kubelet[1742]: E0430 00:59:27.186075 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:27.237191 kubelet[1742]: E0430 00:59:27.237149 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.337711 kubelet[1742]: E0430 00:59:27.337593 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.438526 kubelet[1742]: E0430 00:59:27.438485 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.539367 kubelet[1742]: E0430 00:59:27.539306 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.640490 kubelet[1742]: E0430 00:59:27.640442 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.153\" not found" Apr 30 00:59:27.741380 kubelet[1742]: I0430 00:59:27.741350 1742 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Apr 30 00:59:27.741933 containerd[1427]: time="2025-04-30T00:59:27.741869014Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 00:59:27.743730 kubelet[1742]: I0430 00:59:27.743688 1742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Apr 30 00:59:28.186818 kubelet[1742]: E0430 00:59:28.186769 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:28.186818 kubelet[1742]: I0430 00:59:28.186790 1742 apiserver.go:52] "Watching apiserver" Apr 30 00:59:28.190850 kubelet[1742]: I0430 00:59:28.190794 1742 topology_manager.go:215] "Topology Admit Handler" podUID="db3a1930-06aa-4a95-a8e8-35314323f3d1" podNamespace="calico-system" podName="calico-node-9lmr2" Apr 30 00:59:28.190954 kubelet[1742]: I0430 00:59:28.190902 1742 topology_manager.go:215] "Topology Admit Handler" podUID="2e006cc2-2ba8-413a-9a6d-b9e91401ca76" podNamespace="calico-system" podName="csi-node-driver-4nn59" Apr 30 00:59:28.191036 kubelet[1742]: I0430 00:59:28.191003 1742 topology_manager.go:215] "Topology Admit Handler" podUID="db83bb6f-9c71-4d34-8e02-95ccf64fbc13" podNamespace="kube-system" podName="kube-proxy-whd9b" Apr 30 00:59:28.191710 kubelet[1742]: E0430 00:59:28.191143 1742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4nn59" podUID="2e006cc2-2ba8-413a-9a6d-b9e91401ca76" Apr 30 00:59:28.194241 kubelet[1742]: I0430 00:59:28.194192 1742 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:59:28.197452 systemd[1]: Created slice kubepods-besteffort-poddb83bb6f_9c71_4d34_8e02_95ccf64fbc13.slice - libcontainer container kubepods-besteffort-poddb83bb6f_9c71_4d34_8e02_95ccf64fbc13.slice. 
Apr 30 00:59:28.206461 kubelet[1742]: I0430 00:59:28.206424 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e006cc2-2ba8-413a-9a6d-b9e91401ca76-kubelet-dir\") pod \"csi-node-driver-4nn59\" (UID: \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\") " pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:28.206461 kubelet[1742]: I0430 00:59:28.206464 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-lib-modules\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206461 kubelet[1742]: I0430 00:59:28.206484 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-xtables-lock\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206461 kubelet[1742]: I0430 00:59:28.206500 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db3a1930-06aa-4a95-a8e8-35314323f3d1-tigera-ca-bundle\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206461 kubelet[1742]: I0430 00:59:28.206522 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db3a1930-06aa-4a95-a8e8-35314323f3d1-node-certs\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206809 kubelet[1742]: I0430 00:59:28.206539 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-cni-bin-dir\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206809 kubelet[1742]: I0430 00:59:28.206554 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcx8t\" (UniqueName: \"kubernetes.io/projected/db3a1930-06aa-4a95-a8e8-35314323f3d1-kube-api-access-wcx8t\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206809 kubelet[1742]: I0430 00:59:28.206580 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2e006cc2-2ba8-413a-9a6d-b9e91401ca76-varrun\") pod \"csi-node-driver-4nn59\" (UID: \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\") " pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:28.206809 kubelet[1742]: I0430 00:59:28.206599 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmrj\" (UniqueName: \"kubernetes.io/projected/2e006cc2-2ba8-413a-9a6d-b9e91401ca76-kube-api-access-mmmrj\") pod \"csi-node-driver-4nn59\" (UID: \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\") " pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:28.206809 kubelet[1742]: I0430 00:59:28.206614 1742 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db83bb6f-9c71-4d34-8e02-95ccf64fbc13-lib-modules\") pod \"kube-proxy-whd9b\" (UID: \"db83bb6f-9c71-4d34-8e02-95ccf64fbc13\") " pod="kube-system/kube-proxy-whd9b" Apr 30 00:59:28.206911 kubelet[1742]: I0430 00:59:28.206628 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-policysync\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206911 kubelet[1742]: I0430 00:59:28.206657 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-flexvol-driver-host\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206911 kubelet[1742]: I0430 00:59:28.206672 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-var-lib-calico\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206911 kubelet[1742]: I0430 00:59:28.206690 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-cni-net-dir\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.206911 kubelet[1742]: I0430 00:59:28.206707 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e006cc2-2ba8-413a-9a6d-b9e91401ca76-socket-dir\") pod \"csi-node-driver-4nn59\" (UID: \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\") " pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:28.207037 kubelet[1742]: I0430 00:59:28.206720 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e006cc2-2ba8-413a-9a6d-b9e91401ca76-registration-dir\") pod \"csi-node-driver-4nn59\" (UID: \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\") " pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:28.207037 kubelet[1742]: I0430 00:59:28.206734 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db83bb6f-9c71-4d34-8e02-95ccf64fbc13-kube-proxy\") pod \"kube-proxy-whd9b\" (UID: \"db83bb6f-9c71-4d34-8e02-95ccf64fbc13\") " pod="kube-system/kube-proxy-whd9b" Apr 30 00:59:28.207037 kubelet[1742]: I0430 00:59:28.206747 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db83bb6f-9c71-4d34-8e02-95ccf64fbc13-xtables-lock\") pod \"kube-proxy-whd9b\" (UID: \"db83bb6f-9c71-4d34-8e02-95ccf64fbc13\") " pod="kube-system/kube-proxy-whd9b" Apr 30 00:59:28.207037 kubelet[1742]: I0430 00:59:28.206762 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kpqgh\" (UniqueName: \"kubernetes.io/projected/db83bb6f-9c71-4d34-8e02-95ccf64fbc13-kube-api-access-kpqgh\") pod \"kube-proxy-whd9b\" (UID: \"db83bb6f-9c71-4d34-8e02-95ccf64fbc13\") " pod="kube-system/kube-proxy-whd9b" Apr 30 00:59:28.207037 kubelet[1742]: I0430 00:59:28.206776 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-var-run-calico\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.207133 kubelet[1742]: I0430 00:59:28.206790 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db3a1930-06aa-4a95-a8e8-35314323f3d1-cni-log-dir\") pod \"calico-node-9lmr2\" (UID: \"db3a1930-06aa-4a95-a8e8-35314323f3d1\") " pod="calico-system/calico-node-9lmr2" Apr 30 00:59:28.212451 systemd[1]: Created slice kubepods-besteffort-poddb3a1930_06aa_4a95_a8e8_35314323f3d1.slice - libcontainer container kubepods-besteffort-poddb3a1930_06aa_4a95_a8e8_35314323f3d1.slice. Apr 30 00:59:28.308250 kubelet[1742]: E0430 00:59:28.308221 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.308250 kubelet[1742]: W0430 00:59:28.308242 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.308414 kubelet[1742]: E0430 00:59:28.308264 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.308483 kubelet[1742]: E0430 00:59:28.308471 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.308483 kubelet[1742]: W0430 00:59:28.308482 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.308539 kubelet[1742]: E0430 00:59:28.308521 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.308691 kubelet[1742]: E0430 00:59:28.308675 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.308691 kubelet[1742]: W0430 00:59:28.308683 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.308814 kubelet[1742]: E0430 00:59:28.308736 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:59:28.308848 kubelet[1742]: E0430 00:59:28.308834 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.308848 kubelet[1742]: W0430 00:59:28.308844 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.308922 kubelet[1742]: E0430 00:59:28.308911 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.309079 kubelet[1742]: E0430 00:59:28.309051 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.309079 kubelet[1742]: W0430 00:59:28.309062 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.309146 kubelet[1742]: E0430 00:59:28.309099 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.309213 kubelet[1742]: E0430 00:59:28.309201 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.309213 kubelet[1742]: W0430 00:59:28.309211 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.309265 kubelet[1742]: E0430 00:59:28.309251 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.309343 kubelet[1742]: E0430 00:59:28.309333 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.309343 kubelet[1742]: W0430 00:59:28.309343 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.309445 kubelet[1742]: E0430 00:59:28.309413 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.309569 kubelet[1742]: E0430 00:59:28.309461 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.309569 kubelet[1742]: W0430 00:59:28.309468 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.309569 kubelet[1742]: E0430 00:59:28.309480 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:59:28.309707 kubelet[1742]: E0430 00:59:28.309693 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.309764 kubelet[1742]: W0430 00:59:28.309752 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.309836 kubelet[1742]: E0430 00:59:28.309825 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.310051 kubelet[1742]: E0430 00:59:28.310036 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.310051 kubelet[1742]: W0430 00:59:28.310050 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.310118 kubelet[1742]: E0430 00:59:28.310064 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.310286 kubelet[1742]: E0430 00:59:28.310258 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.310286 kubelet[1742]: W0430 00:59:28.310270 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.310387 kubelet[1742]: E0430 00:59:28.310335 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.310411 kubelet[1742]: E0430 00:59:28.310399 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.310411 kubelet[1742]: W0430 00:59:28.310406 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.310460 kubelet[1742]: E0430 00:59:28.310435 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.310539 kubelet[1742]: E0430 00:59:28.310527 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.310539 kubelet[1742]: W0430 00:59:28.310537 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.310585 kubelet[1742]: E0430 00:59:28.310563 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:59:28.310735 kubelet[1742]: E0430 00:59:28.310722 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.310735 kubelet[1742]: W0430 00:59:28.310732 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.310813 kubelet[1742]: E0430 00:59:28.310800 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.311041 kubelet[1742]: E0430 00:59:28.311026 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.311041 kubelet[1742]: W0430 00:59:28.311039 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.311139 kubelet[1742]: E0430 00:59:28.311053 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.311281 kubelet[1742]: E0430 00:59:28.311267 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.311303 kubelet[1742]: W0430 00:59:28.311281 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.311303 kubelet[1742]: E0430 00:59:28.311297 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.311503 kubelet[1742]: E0430 00:59:28.311491 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.311503 kubelet[1742]: W0430 00:59:28.311502 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.311562 kubelet[1742]: E0430 00:59:28.311516 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.311778 kubelet[1742]: E0430 00:59:28.311763 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.311778 kubelet[1742]: W0430 00:59:28.311775 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.311846 kubelet[1742]: E0430 00:59:28.311788 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:59:28.312058 kubelet[1742]: E0430 00:59:28.312044 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.312058 kubelet[1742]: W0430 00:59:28.312055 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.312129 kubelet[1742]: E0430 00:59:28.312100 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.312471 kubelet[1742]: E0430 00:59:28.312455 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.312471 kubelet[1742]: W0430 00:59:28.312469 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.312537 kubelet[1742]: E0430 00:59:28.312479 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.312712 kubelet[1742]: E0430 00:59:28.312700 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.312712 kubelet[1742]: W0430 00:59:28.312712 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.312766 kubelet[1742]: E0430 00:59:28.312720 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.314970 kubelet[1742]: E0430 00:59:28.314953 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.315116 kubelet[1742]: W0430 00:59:28.315054 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.315116 kubelet[1742]: E0430 00:59:28.315073 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.318069 kubelet[1742]: E0430 00:59:28.318007 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.318069 kubelet[1742]: W0430 00:59:28.318021 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.318069 kubelet[1742]: E0430 00:59:28.318034 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:59:28.322924 kubelet[1742]: E0430 00:59:28.322789 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.322924 kubelet[1742]: W0430 00:59:28.322806 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.322924 kubelet[1742]: E0430 00:59:28.322822 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.324957 kubelet[1742]: E0430 00:59:28.323167 1742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:59:28.324957 kubelet[1742]: W0430 00:59:28.323180 1742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:59:28.324957 kubelet[1742]: E0430 00:59:28.323199 1742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:59:28.510862 kubelet[1742]: E0430 00:59:28.510717 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:28.511624 containerd[1427]: time="2025-04-30T00:59:28.511537504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-whd9b,Uid:db83bb6f-9c71-4d34-8e02-95ccf64fbc13,Namespace:kube-system,Attempt:0,}" Apr 30 00:59:28.515235 kubelet[1742]: E0430 00:59:28.515208 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:28.515779 containerd[1427]: time="2025-04-30T00:59:28.515738290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9lmr2,Uid:db3a1930-06aa-4a95-a8e8-35314323f3d1,Namespace:calico-system,Attempt:0,}" Apr 30 00:59:29.187994 kubelet[1742]: E0430 00:59:29.187945 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:29.219224 containerd[1427]: time="2025-04-30T00:59:29.219168969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:59:29.221359 containerd[1427]: time="2025-04-30T00:59:29.221022603Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:59:29.222946 containerd[1427]: time="2025-04-30T00:59:29.222890778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Apr 30 00:59:29.223842 containerd[1427]: time="2025-04-30T00:59:29.223795117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:59:29.224767 containerd[1427]: time="2025-04-30T00:59:29.224715085Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:59:29.228377 containerd[1427]: time="2025-04-30T00:59:29.228320795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:59:29.229147 containerd[1427]: time="2025-04-30T00:59:29.229113345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 713.29631ms" Apr 30 00:59:29.234356 containerd[1427]: time="2025-04-30T00:59:29.234304558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 722.675929ms" Apr 30 00:59:29.317105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686543809.mount: Deactivated successfully. Apr 30 00:59:29.374741 containerd[1427]: time="2025-04-30T00:59:29.374462465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:59:29.374741 containerd[1427]: time="2025-04-30T00:59:29.374513908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:59:29.374741 containerd[1427]: time="2025-04-30T00:59:29.374573367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:29.374876 containerd[1427]: time="2025-04-30T00:59:29.374747072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:59:29.375672 containerd[1427]: time="2025-04-30T00:59:29.374944585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:59:29.375714 containerd[1427]: time="2025-04-30T00:59:29.375660072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:29.376308 containerd[1427]: time="2025-04-30T00:59:29.376256115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:29.376308 containerd[1427]: time="2025-04-30T00:59:29.376228763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:29.458176 systemd[1]: Started cri-containerd-589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59.scope - libcontainer container 589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59. 
Apr 30 00:59:29.459593 systemd[1]: Started cri-containerd-c6aead1c8e611c6146b9233eb94ce2deff8a7eec5213c76570d6a39767fea1d4.scope - libcontainer container c6aead1c8e611c6146b9233eb94ce2deff8a7eec5213c76570d6a39767fea1d4. Apr 30 00:59:29.480757 containerd[1427]: time="2025-04-30T00:59:29.480581019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9lmr2,Uid:db3a1930-06aa-4a95-a8e8-35314323f3d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\"" Apr 30 00:59:29.482061 kubelet[1742]: E0430 00:59:29.482034 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:29.482700 containerd[1427]: time="2025-04-30T00:59:29.482358516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-whd9b,Uid:db83bb6f-9c71-4d34-8e02-95ccf64fbc13,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6aead1c8e611c6146b9233eb94ce2deff8a7eec5213c76570d6a39767fea1d4\"" Apr 30 00:59:29.483161 kubelet[1742]: E0430 00:59:29.483137 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:29.483322 containerd[1427]: time="2025-04-30T00:59:29.483297780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:59:30.188341 kubelet[1742]: E0430 00:59:30.188288 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:30.352984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845403596.mount: Deactivated successfully. 
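The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; only the first three (1.1.1.1 1.0.0.1 8.8.8.8) survive in the applied line. A rough sketch of that trimming, assuming the three-server cap implied by the applied line; the parsing below is simplified and is not the kubelet's own implementation:

package main

// Rough sketch of the resolv.conf trimming implied by the dns.go warnings
// above: collect nameserver entries and keep only the first three.

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	const limit = 3 // cap observed in the log: three servers remain applied
	if len(nameservers) > limit {
		fmt.Printf("Nameserver limits exceeded, applying: %s\n",
			strings.Join(nameservers[:limit], " "))
		nameservers = nameservers[:limit]
	}
	fmt.Println("nameservers:", nameservers)
}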
Apr 30 00:59:30.392650 kubelet[1742]: E0430 00:59:30.391967 1742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4nn59" podUID="2e006cc2-2ba8-413a-9a6d-b9e91401ca76" Apr 30 00:59:30.426533 containerd[1427]: time="2025-04-30T00:59:30.426480629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:30.428585 containerd[1427]: time="2025-04-30T00:59:30.428551097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223" Apr 30 00:59:30.429627 containerd[1427]: time="2025-04-30T00:59:30.429566570Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:30.434937 containerd[1427]: time="2025-04-30T00:59:30.434844328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:30.435688 containerd[1427]: time="2025-04-30T00:59:30.435491244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 952.08668ms" Apr 30 00:59:30.435688 containerd[1427]: time="2025-04-30T00:59:30.435527750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:59:30.436771 containerd[1427]: time="2025-04-30T00:59:30.436664250Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 00:59:30.438005 containerd[1427]: time="2025-04-30T00:59:30.437973054Z" level=info msg="CreateContainer within sandbox \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:59:30.458543 containerd[1427]: time="2025-04-30T00:59:30.458419947Z" level=info msg="CreateContainer within sandbox \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1\"" Apr 30 00:59:30.459359 containerd[1427]: time="2025-04-30T00:59:30.459267864Z" level=info msg="StartContainer for \"67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1\"" Apr 30 00:59:30.489107 systemd[1]: Started cri-containerd-67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1.scope - libcontainer container 67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1. 
Apr 30 00:59:30.512413 containerd[1427]: time="2025-04-30T00:59:30.512341321Z" level=info msg="StartContainer for \"67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1\" returns successfully" Apr 30 00:59:30.553074 systemd[1]: cri-containerd-67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1.scope: Deactivated successfully. Apr 30 00:59:30.605315 containerd[1427]: time="2025-04-30T00:59:30.605237750Z" level=info msg="shim disconnected" id=67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1 namespace=k8s.io Apr 30 00:59:30.605315 containerd[1427]: time="2025-04-30T00:59:30.605293213Z" level=warning msg="cleaning up after shim disconnected" id=67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1 namespace=k8s.io Apr 30 00:59:30.605315 containerd[1427]: time="2025-04-30T00:59:30.605301423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:59:31.188466 kubelet[1742]: E0430 00:59:31.188429 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:31.334576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67a2470b57d51219a292ac61c2b9a9773ac7b134f5817e17739176d4468f7bb1-rootfs.mount: Deactivated successfully. Apr 30 00:59:31.386348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852347455.mount: Deactivated successfully. Apr 30 00:59:31.409020 kubelet[1742]: E0430 00:59:31.408991 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:31.603030 containerd[1427]: time="2025-04-30T00:59:31.602535261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:31.603525 containerd[1427]: time="2025-04-30T00:59:31.603496500Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" Apr 30 00:59:31.604591 containerd[1427]: time="2025-04-30T00:59:31.604565120Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:31.610902 containerd[1427]: time="2025-04-30T00:59:31.610594049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:31.611679 containerd[1427]: time="2025-04-30T00:59:31.611629690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.174930511s" Apr 30 00:59:31.611679 containerd[1427]: time="2025-04-30T00:59:31.611667254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 00:59:31.612834 containerd[1427]: time="2025-04-30T00:59:31.612585138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:59:31.614021 containerd[1427]: time="2025-04-30T00:59:31.613986317Z" level=info msg="CreateContainer within sandbox 
\"c6aead1c8e611c6146b9233eb94ce2deff8a7eec5213c76570d6a39767fea1d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:59:31.626459 containerd[1427]: time="2025-04-30T00:59:31.626353005Z" level=info msg="CreateContainer within sandbox \"c6aead1c8e611c6146b9233eb94ce2deff8a7eec5213c76570d6a39767fea1d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99b10ee398f2e33208345c01ab04900a63e0f517b65a35276e8779e2b8399d87\"" Apr 30 00:59:31.627173 containerd[1427]: time="2025-04-30T00:59:31.627138132Z" level=info msg="StartContainer for \"99b10ee398f2e33208345c01ab04900a63e0f517b65a35276e8779e2b8399d87\"" Apr 30 00:59:31.654105 systemd[1]: Started cri-containerd-99b10ee398f2e33208345c01ab04900a63e0f517b65a35276e8779e2b8399d87.scope - libcontainer container 99b10ee398f2e33208345c01ab04900a63e0f517b65a35276e8779e2b8399d87. Apr 30 00:59:31.674353 containerd[1427]: time="2025-04-30T00:59:31.674194970Z" level=info msg="StartContainer for \"99b10ee398f2e33208345c01ab04900a63e0f517b65a35276e8779e2b8399d87\" returns successfully" Apr 30 00:59:32.188888 kubelet[1742]: E0430 00:59:32.188826 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:32.391856 kubelet[1742]: E0430 00:59:32.391751 1742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4nn59" podUID="2e006cc2-2ba8-413a-9a6d-b9e91401ca76" Apr 30 00:59:32.412371 kubelet[1742]: E0430 00:59:32.411897 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:33.189300 kubelet[1742]: E0430 00:59:33.189258 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:33.413733 kubelet[1742]: E0430 00:59:33.413639 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:33.702581 containerd[1427]: time="2025-04-30T00:59:33.702532101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:33.703781 containerd[1427]: time="2025-04-30T00:59:33.703747942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:59:33.704893 containerd[1427]: time="2025-04-30T00:59:33.704835049Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:33.708403 containerd[1427]: time="2025-04-30T00:59:33.707318908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:33.708403 containerd[1427]: time="2025-04-30T00:59:33.708039336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.095421068s" Apr 30 00:59:33.708403 containerd[1427]: time="2025-04-30T00:59:33.708063154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:59:33.710441 containerd[1427]: time="2025-04-30T00:59:33.710408519Z" level=info msg="CreateContainer within sandbox \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:59:33.722126 containerd[1427]: time="2025-04-30T00:59:33.722077061Z" level=info msg="CreateContainer within sandbox \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a\"" Apr 30 00:59:33.723321 containerd[1427]: time="2025-04-30T00:59:33.723294870Z" level=info msg="StartContainer for \"e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a\"" Apr 30 00:59:33.754130 systemd[1]: Started cri-containerd-e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a.scope - libcontainer container e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a. Apr 30 00:59:33.789911 containerd[1427]: time="2025-04-30T00:59:33.789848142Z" level=info msg="StartContainer for \"e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a\" returns successfully" Apr 30 00:59:34.189611 kubelet[1742]: E0430 00:59:34.189576 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:34.231695 containerd[1427]: time="2025-04-30T00:59:34.231643547Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:59:34.233519 systemd[1]: cri-containerd-e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a.scope: Deactivated successfully. Apr 30 00:59:34.248630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a-rootfs.mount: Deactivated successfully. Apr 30 00:59:34.287295 kubelet[1742]: I0430 00:59:34.287264 1742 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:59:34.395854 systemd[1]: Created slice kubepods-besteffort-pod2e006cc2_2ba8_413a_9a6d_b9e91401ca76.slice - libcontainer container kubepods-besteffort-pod2e006cc2_2ba8_413a_9a6d_b9e91401ca76.slice. 
Apr 30 00:59:34.397950 containerd[1427]: time="2025-04-30T00:59:34.397891414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4nn59,Uid:2e006cc2-2ba8-413a-9a6d-b9e91401ca76,Namespace:calico-system,Attempt:0,}" Apr 30 00:59:34.405088 containerd[1427]: time="2025-04-30T00:59:34.405035145Z" level=info msg="shim disconnected" id=e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a namespace=k8s.io Apr 30 00:59:34.405088 containerd[1427]: time="2025-04-30T00:59:34.405084484Z" level=warning msg="cleaning up after shim disconnected" id=e6f2677dbde2af71b1ada9c55f919439c77209a9246027f3768c6a8a6ddc1e1a namespace=k8s.io Apr 30 00:59:34.405088 containerd[1427]: time="2025-04-30T00:59:34.405094119Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:59:34.415098 containerd[1427]: time="2025-04-30T00:59:34.415059091Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:59:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:59:34.418316 kubelet[1742]: E0430 00:59:34.418290 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:34.418959 containerd[1427]: time="2025-04-30T00:59:34.418876548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:59:34.444959 kubelet[1742]: I0430 00:59:34.443356 1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-whd9b" podStartSLOduration=6.314435349 podStartE2EDuration="8.443338623s" podCreationTimestamp="2025-04-30 00:59:26 +0000 UTC" firstStartedPulling="2025-04-30 00:59:29.483569335 +0000 UTC m=+4.551842298" lastFinishedPulling="2025-04-30 00:59:31.61247261 +0000 UTC m=+6.680745572" observedRunningTime="2025-04-30 00:59:32.424030558 +0000 UTC m=+7.492303520" watchObservedRunningTime="2025-04-30 00:59:34.443338623 +0000 UTC m=+9.511611585" Apr 30 00:59:34.533380 containerd[1427]: time="2025-04-30T00:59:34.533327957Z" level=error msg="Failed to destroy network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:59:34.533673 containerd[1427]: time="2025-04-30T00:59:34.533639569Z" level=error msg="encountered an error cleaning up failed sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:59:34.533713 containerd[1427]: time="2025-04-30T00:59:34.533700028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4nn59,Uid:2e006cc2-2ba8-413a-9a6d-b9e91401ca76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:59:34.533969 kubelet[1742]: E0430 00:59:34.533907 1742 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:59:34.534020 kubelet[1742]: E0430 00:59:34.533995 1742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:34.534020 kubelet[1742]: E0430 00:59:34.534015 1742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4nn59" Apr 30 00:59:34.534074 kubelet[1742]: E0430 00:59:34.534054 1742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4nn59_calico-system(2e006cc2-2ba8-413a-9a6d-b9e91401ca76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4nn59_calico-system(2e006cc2-2ba8-413a-9a6d-b9e91401ca76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4nn59" podUID="2e006cc2-2ba8-413a-9a6d-b9e91401ca76" Apr 30 00:59:35.190238 kubelet[1742]: E0430 00:59:35.190197 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:35.420689 kubelet[1742]: I0430 00:59:35.420649 1742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Apr 30 00:59:35.423947 containerd[1427]: time="2025-04-30T00:59:35.421428073Z" level=info msg="StopPodSandbox for \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\"" Apr 30 00:59:35.423947 containerd[1427]: time="2025-04-30T00:59:35.421597852Z" level=info msg="Ensure that sandbox 8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b in task-service has been cleanup successfully" Apr 30 00:59:35.462397 containerd[1427]: time="2025-04-30T00:59:35.462270735Z" level=error msg="StopPodSandbox for \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\" failed" error="failed to destroy network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:59:35.462529 kubelet[1742]: E0430 00:59:35.462488 1742 remote_runtime.go:222] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Apr 30 00:59:35.462581 kubelet[1742]: E0430 00:59:35.462541 1742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b"} Apr 30 00:59:35.462613 kubelet[1742]: E0430 00:59:35.462595 1742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:59:35.462669 kubelet[1742]: E0430 00:59:35.462616 1742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e006cc2-2ba8-413a-9a6d-b9e91401ca76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4nn59" podUID="2e006cc2-2ba8-413a-9a6d-b9e91401ca76" Apr 30 00:59:36.190628 kubelet[1742]: E0430 00:59:36.190597 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:37.191611 kubelet[1742]: E0430 00:59:37.191560 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:37.465778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814848779.mount: Deactivated successfully. 
Apr 30 00:59:37.714431 containerd[1427]: time="2025-04-30T00:59:37.714374797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:37.716596 containerd[1427]: time="2025-04-30T00:59:37.716476394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:59:37.717868 containerd[1427]: time="2025-04-30T00:59:37.717808597Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:37.720166 containerd[1427]: time="2025-04-30T00:59:37.720110080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:37.720887 containerd[1427]: time="2025-04-30T00:59:37.720627941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.301668549s" Apr 30 00:59:37.720887 containerd[1427]: time="2025-04-30T00:59:37.720663668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:59:37.728115 containerd[1427]: time="2025-04-30T00:59:37.728056949Z" level=info msg="CreateContainer within sandbox \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:59:37.743716 containerd[1427]: time="2025-04-30T00:59:37.743652318Z" level=info msg="CreateContainer within sandbox \"589bcb140dc6e8d8354ff6f51f130317067a872143d1edb36879f549f643ed59\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fca694a207163b744c4e4e148b18ee62906061837a9158fe0ac529e4b4a2c8e4\"" Apr 30 00:59:37.744519 containerd[1427]: time="2025-04-30T00:59:37.744478089Z" level=info msg="StartContainer for \"fca694a207163b744c4e4e148b18ee62906061837a9158fe0ac529e4b4a2c8e4\"" Apr 30 00:59:37.770100 systemd[1]: Started cri-containerd-fca694a207163b744c4e4e148b18ee62906061837a9158fe0ac529e4b4a2c8e4.scope - libcontainer container fca694a207163b744c4e4e148b18ee62906061837a9158fe0ac529e4b4a2c8e4. Apr 30 00:59:37.798427 containerd[1427]: time="2025-04-30T00:59:37.798335053Z" level=info msg="StartContainer for \"fca694a207163b744c4e4e148b18ee62906061837a9158fe0ac529e4b4a2c8e4\" returns successfully" Apr 30 00:59:38.021817 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:59:38.022017 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 00:59:38.165525 kubelet[1742]: I0430 00:59:38.163545 1742 topology_manager.go:215] "Topology Admit Handler" podUID="96eddde2-4d42-4fec-b0a3-89b02f43da95" podNamespace="default" podName="nginx-deployment-85f456d6dd-n4spn" Apr 30 00:59:38.170584 systemd[1]: Created slice kubepods-besteffort-pod96eddde2_4d42_4fec_b0a3_89b02f43da95.slice - libcontainer container kubepods-besteffort-pod96eddde2_4d42_4fec_b0a3_89b02f43da95.slice. 
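The pod_startup_latency_tracker.go entries (kube-proxy-whd9b above, calico-node-9lmr2 and the nginx pod below) carry enough timestamps to reconstruct their two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of the kube-proxy numbers, with the timestamps copied from the log; the formula is inferred from the values rather than quoted from kubelet source:

package main

// Check of the startup-duration numbers logged for kube-proxy-whd9b.
// Timestamps are copied from the log entry; the relationship
// (E2E = running - created, SLO = E2E - image pull time) is inferred.

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-04-30 00:59:26 +0000 UTC")           // podCreationTimestamp
	running := mustParse("2025-04-30 00:59:34.443338623 +0000 UTC") // watchObservedRunningTime
	pullStart := mustParse("2025-04-30 00:59:29.483569335 +0000 UTC")
	pullEnd := mustParse("2025-04-30 00:59:31.61247261 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 8.443338623s, as logged
	fmt.Println("podStartSLOduration:", slo) // 6.314435348s, matching 6.314435349 to within rounding
}

The same relation holds, to within rounding of the logged values, for calico-node-9lmr2 (12.459s end-to-end minus an 8.239s pull gives the logged 4.221s) and for the nginx pod further down.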
Apr 30 00:59:38.179258 kubelet[1742]: I0430 00:59:38.179208 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8jp\" (UniqueName: \"kubernetes.io/projected/96eddde2-4d42-4fec-b0a3-89b02f43da95-kube-api-access-5q8jp\") pod \"nginx-deployment-85f456d6dd-n4spn\" (UID: \"96eddde2-4d42-4fec-b0a3-89b02f43da95\") " pod="default/nginx-deployment-85f456d6dd-n4spn" Apr 30 00:59:38.192183 kubelet[1742]: E0430 00:59:38.192141 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:38.439292 kubelet[1742]: E0430 00:59:38.439160 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:38.459422 kubelet[1742]: I0430 00:59:38.459366 1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9lmr2" podStartSLOduration=4.220578198 podStartE2EDuration="12.45935172s" podCreationTimestamp="2025-04-30 00:59:26 +0000 UTC" firstStartedPulling="2025-04-30 00:59:29.482672732 +0000 UTC m=+4.550945694" lastFinishedPulling="2025-04-30 00:59:37.721446294 +0000 UTC m=+12.789719216" observedRunningTime="2025-04-30 00:59:38.459208054 +0000 UTC m=+13.527481016" watchObservedRunningTime="2025-04-30 00:59:38.45935172 +0000 UTC m=+13.527624682" Apr 30 00:59:38.474109 containerd[1427]: time="2025-04-30T00:59:38.474017613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-n4spn,Uid:96eddde2-4d42-4fec-b0a3-89b02f43da95,Namespace:default,Attempt:0,}" Apr 30 00:59:38.647194 systemd-networkd[1373]: cali50a70fe442d: Link UP Apr 30 00:59:38.647372 systemd-networkd[1373]: cali50a70fe442d: Gained carrier Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.506 [INFO][2344] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.526 [INFO][2344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0 nginx-deployment-85f456d6dd- default 96eddde2-4d42-4fec-b0a3-89b02f43da95 883 0 2025-04-30 00:59:38 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.153 nginx-deployment-85f456d6dd-n4spn eth0 default [] [] [kns.default ksa.default.default] cali50a70fe442d [] []}} ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.526 [INFO][2344] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.589 [INFO][2358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" HandleID="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" 
Workload="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.603 [INFO][2358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" HandleID="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Workload="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd90), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.153", "pod":"nginx-deployment-85f456d6dd-n4spn", "timestamp":"2025-04-30 00:59:38.589691794 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.603 [INFO][2358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.603 [INFO][2358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.603 [INFO][2358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.605 [INFO][2358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.612 [INFO][2358] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.617 [INFO][2358] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.620 [INFO][2358] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.625 [INFO][2358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.625 [INFO][2358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.627 [INFO][2358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.633 [INFO][2358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.640 [INFO][2358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.129/26] block=192.168.31.128/26 handle="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.640 [INFO][2358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.129/26] handle="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" host="10.0.0.153" Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.640 
[INFO][2358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:59:38.657253 containerd[1427]: 2025-04-30 00:59:38.640 [INFO][2358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.129/26] IPv6=[] ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" HandleID="k8s-pod-network.aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Workload="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.657997 containerd[1427]: 2025-04-30 00:59:38.642 [INFO][2344] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"96eddde2-4d42-4fec-b0a3-89b02f43da95", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-n4spn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali50a70fe442d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:59:38.657997 containerd[1427]: 2025-04-30 00:59:38.642 [INFO][2344] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.129/32] ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.657997 containerd[1427]: 2025-04-30 00:59:38.642 [INFO][2344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50a70fe442d ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.657997 containerd[1427]: 2025-04-30 00:59:38.647 [INFO][2344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.657997 containerd[1427]: 2025-04-30 00:59:38.647 [INFO][2344] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" 
WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"96eddde2-4d42-4fec-b0a3-89b02f43da95", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe", Pod:"nginx-deployment-85f456d6dd-n4spn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali50a70fe442d", MAC:"92:74:dd:36:aa:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:59:38.657997 containerd[1427]: 2025-04-30 00:59:38.655 [INFO][2344] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe" Namespace="default" Pod="nginx-deployment-85f456d6dd-n4spn" WorkloadEndpoint="10.0.0.153-k8s-nginx--deployment--85f456d6dd--n4spn-eth0" Apr 30 00:59:38.672953 containerd[1427]: time="2025-04-30T00:59:38.672798414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:59:38.672953 containerd[1427]: time="2025-04-30T00:59:38.672910893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:59:38.672953 containerd[1427]: time="2025-04-30T00:59:38.672942601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:38.673218 containerd[1427]: time="2025-04-30T00:59:38.673049509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:38.693140 systemd[1]: Started cri-containerd-aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe.scope - libcontainer container aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe. 
Apr 30 00:59:38.703036 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:59:38.719305 containerd[1427]: time="2025-04-30T00:59:38.719262669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-n4spn,Uid:96eddde2-4d42-4fec-b0a3-89b02f43da95,Namespace:default,Attempt:0,} returns sandbox id \"aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe\"" Apr 30 00:59:38.721158 containerd[1427]: time="2025-04-30T00:59:38.721129126Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Apr 30 00:59:39.192730 kubelet[1742]: E0430 00:59:39.192687 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:39.441255 kubelet[1742]: I0430 00:59:39.441137 1742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:59:39.441881 kubelet[1742]: E0430 00:59:39.441861 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:39.841069 systemd-networkd[1373]: cali50a70fe442d: Gained IPv6LL Apr 30 00:59:40.193055 kubelet[1742]: E0430 00:59:40.193013 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:40.466749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014835991.mount: Deactivated successfully. Apr 30 00:59:41.194224 kubelet[1742]: E0430 00:59:41.194161 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:41.383957 containerd[1427]: time="2025-04-30T00:59:41.383885209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:41.384490 containerd[1427]: time="2025-04-30T00:59:41.384358125Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948638" Apr 30 00:59:41.385958 containerd[1427]: time="2025-04-30T00:59:41.385424087Z" level=info msg="ImageCreate event name:\"sha256:e20c52090e36e47716225aae95fda06191c98f3a8d7f6371786c19c9e59befb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:41.389140 containerd[1427]: time="2025-04-30T00:59:41.389029598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:41.389736 containerd[1427]: time="2025-04-30T00:59:41.389698514Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e20c52090e36e47716225aae95fda06191c98f3a8d7f6371786c19c9e59befb1\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"69948516\" in 2.668529551s" Apr 30 00:59:41.389792 containerd[1427]: time="2025-04-30T00:59:41.389737089Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e20c52090e36e47716225aae95fda06191c98f3a8d7f6371786c19c9e59befb1\"" Apr 30 00:59:41.392504 containerd[1427]: time="2025-04-30T00:59:41.392468070Z" level=info msg="CreateContainer within sandbox \"aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Apr 30 00:59:41.403966 containerd[1427]: time="2025-04-30T00:59:41.403909416Z" level=info msg="CreateContainer within sandbox \"aaba96767ef52bc323e7857c45f3075844df2f585c7d290c02f844a6cd5e93fe\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8acc7a9f807589818e30dcc1dce0fa7bcd8551587a31ec30977419a049270bd3\"" Apr 30 00:59:41.404427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466684911.mount: Deactivated successfully. Apr 30 00:59:41.405143 containerd[1427]: time="2025-04-30T00:59:41.404422949Z" level=info msg="StartContainer for \"8acc7a9f807589818e30dcc1dce0fa7bcd8551587a31ec30977419a049270bd3\"" Apr 30 00:59:41.486127 systemd[1]: Started cri-containerd-8acc7a9f807589818e30dcc1dce0fa7bcd8551587a31ec30977419a049270bd3.scope - libcontainer container 8acc7a9f807589818e30dcc1dce0fa7bcd8551587a31ec30977419a049270bd3. Apr 30 00:59:41.549400 containerd[1427]: time="2025-04-30T00:59:41.549354960Z" level=info msg="StartContainer for \"8acc7a9f807589818e30dcc1dce0fa7bcd8551587a31ec30977419a049270bd3\" returns successfully" Apr 30 00:59:42.194627 kubelet[1742]: E0430 00:59:42.194548 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:42.469732 kubelet[1742]: I0430 00:59:42.469578 1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-n4spn" podStartSLOduration=1.799262964 podStartE2EDuration="4.469558693s" podCreationTimestamp="2025-04-30 00:59:38 +0000 UTC" firstStartedPulling="2025-04-30 00:59:38.720679889 +0000 UTC m=+13.788952811" lastFinishedPulling="2025-04-30 00:59:41.390975578 +0000 UTC m=+16.459248540" observedRunningTime="2025-04-30 00:59:42.469282668 +0000 UTC m=+17.537555630" watchObservedRunningTime="2025-04-30 00:59:42.469558693 +0000 UTC m=+17.537831655" Apr 30 00:59:43.195259 kubelet[1742]: E0430 00:59:43.195217 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:44.195828 kubelet[1742]: E0430 00:59:44.195777 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:45.196271 kubelet[1742]: E0430 00:59:45.196223 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:45.638967 kernel: bpftool[2765]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:59:45.768847 kubelet[1742]: I0430 00:59:45.768780 1742 topology_manager.go:215] "Topology Admit Handler" podUID="e0017a84-64bf-4209-a870-85bbe46d51e9" podNamespace="default" podName="nfs-server-provisioner-0" Apr 30 00:59:45.781900 systemd[1]: Created slice kubepods-besteffort-pode0017a84_64bf_4209_a870_85bbe46d51e9.slice - libcontainer container kubepods-besteffort-pode0017a84_64bf_4209_a870_85bbe46d51e9.slice. 
Apr 30 00:59:45.819065 systemd-networkd[1373]: vxlan.calico: Link UP Apr 30 00:59:45.819075 systemd-networkd[1373]: vxlan.calico: Gained carrier Apr 30 00:59:45.927438 kubelet[1742]: I0430 00:59:45.927046 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzn4s\" (UniqueName: \"kubernetes.io/projected/e0017a84-64bf-4209-a870-85bbe46d51e9-kube-api-access-wzn4s\") pod \"nfs-server-provisioner-0\" (UID: \"e0017a84-64bf-4209-a870-85bbe46d51e9\") " pod="default/nfs-server-provisioner-0" Apr 30 00:59:45.927438 kubelet[1742]: I0430 00:59:45.927099 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e0017a84-64bf-4209-a870-85bbe46d51e9-data\") pod \"nfs-server-provisioner-0\" (UID: \"e0017a84-64bf-4209-a870-85bbe46d51e9\") " pod="default/nfs-server-provisioner-0" Apr 30 00:59:46.085806 containerd[1427]: time="2025-04-30T00:59:46.085765530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e0017a84-64bf-4209-a870-85bbe46d51e9,Namespace:default,Attempt:0,}" Apr 30 00:59:46.186141 kubelet[1742]: E0430 00:59:46.186019 1742 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:46.197361 kubelet[1742]: E0430 00:59:46.197286 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:46.213076 systemd-networkd[1373]: cali60e51b789ff: Link UP Apr 30 00:59:46.213612 systemd-networkd[1373]: cali60e51b789ff: Gained carrier Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.140 [INFO][2887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e0017a84-64bf-4209-a870-85bbe46d51e9 1015 0 2025-04-30 00:59:45 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.153 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.140 [INFO][2887] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.169 [INFO][2901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" HandleID="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Workload="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.180 [INFO][2901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" HandleID="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Workload="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d81f0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.153", "pod":"nfs-server-provisioner-0", "timestamp":"2025-04-30 00:59:46.169206153 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.181 [INFO][2901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.181 [INFO][2901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.181 [INFO][2901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.182 [INFO][2901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.187 [INFO][2901] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.192 [INFO][2901] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.194 [INFO][2901] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.197 [INFO][2901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.197 [INFO][2901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.199 [INFO][2901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754 Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.203 [INFO][2901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.209 [INFO][2901] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.130/26] block=192.168.31.128/26 handle="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.209 [INFO][2901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.130/26] 
handle="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" host="10.0.0.153" Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.209 [INFO][2901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:59:46.225865 containerd[1427]: 2025-04-30 00:59:46.209 [INFO][2901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.130/26] IPv6=[] ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" HandleID="k8s-pod-network.d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Workload="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.226470 containerd[1427]: 2025-04-30 00:59:46.211 [INFO][2887] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e0017a84-64bf-4209-a870-85bbe46d51e9", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:59:46.226470 containerd[1427]: 2025-04-30 00:59:46.211 [INFO][2887] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.130/32] ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.226470 containerd[1427]: 2025-04-30 00:59:46.211 [INFO][2887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.226470 containerd[1427]: 2025-04-30 00:59:46.213 [INFO][2887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.226608 containerd[1427]: 2025-04-30 00:59:46.213 [INFO][2887] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e0017a84-64bf-4209-a870-85bbe46d51e9", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e2:5c:9f:70:f0:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:59:46.226608 containerd[1427]: 2025-04-30 00:59:46.224 [INFO][2887] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 00:59:46.261489 containerd[1427]: time="2025-04-30T00:59:46.261258371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:59:46.261489 containerd[1427]: time="2025-04-30T00:59:46.261315693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:59:46.261489 containerd[1427]: time="2025-04-30T00:59:46.261326781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:46.261489 containerd[1427]: time="2025-04-30T00:59:46.261409001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:46.290150 systemd[1]: Started cri-containerd-d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754.scope - libcontainer container d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754. 
Apr 30 00:59:46.300270 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:59:46.359889 containerd[1427]: time="2025-04-30T00:59:46.359848102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e0017a84-64bf-4209-a870-85bbe46d51e9,Namespace:default,Attempt:0,} returns sandbox id \"d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754\"" Apr 30 00:59:46.361393 containerd[1427]: time="2025-04-30T00:59:46.361362132Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Apr 30 00:59:46.945088 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Apr 30 00:59:47.197530 kubelet[1742]: E0430 00:59:47.197403 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:48.033173 systemd-networkd[1373]: cali60e51b789ff: Gained IPv6LL Apr 30 00:59:48.197667 kubelet[1742]: E0430 00:59:48.197625 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:49.198287 kubelet[1742]: E0430 00:59:49.198200 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:49.391649 containerd[1427]: time="2025-04-30T00:59:49.391600384Z" level=info msg="StopPodSandbox for \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\"" Apr 30 00:59:49.499392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324268297.mount: Deactivated successfully. Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.474 [INFO][2992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.475 [INFO][2992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" iface="eth0" netns="/var/run/netns/cni-4f2005ad-3465-3d9f-9ac8-276000a3553c" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.475 [INFO][2992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" iface="eth0" netns="/var/run/netns/cni-4f2005ad-3465-3d9f-9ac8-276000a3553c" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.476 [INFO][2992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" iface="eth0" netns="/var/run/netns/cni-4f2005ad-3465-3d9f-9ac8-276000a3553c" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.476 [INFO][2992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.476 [INFO][2992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.498 [INFO][3002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" HandleID="k8s-pod-network.8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Workload="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.498 [INFO][3002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.498 [INFO][3002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.508 [WARNING][3002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" HandleID="k8s-pod-network.8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Workload="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.508 [INFO][3002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" HandleID="k8s-pod-network.8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Workload="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.509 [INFO][3002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:59:49.512105 containerd[1427]: 2025-04-30 00:59:49.510 [INFO][2992] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b" Apr 30 00:59:49.512662 containerd[1427]: time="2025-04-30T00:59:49.512271261Z" level=info msg="TearDown network for sandbox \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\" successfully" Apr 30 00:59:49.512662 containerd[1427]: time="2025-04-30T00:59:49.512300723Z" level=info msg="StopPodSandbox for \"8a556464f2e3bd34e169ac907f460d9b3af73b2e96fd0f3b2d2a765e8bbee69b\" returns successfully" Apr 30 00:59:49.513242 containerd[1427]: time="2025-04-30T00:59:49.513012289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4nn59,Uid:2e006cc2-2ba8-413a-9a6d-b9e91401ca76,Namespace:calico-system,Attempt:1,}" Apr 30 00:59:49.513811 systemd[1]: run-netns-cni\x2d4f2005ad\x2d3465\x2d3d9f\x2d9ac8\x2d276000a3553c.mount: Deactivated successfully. 
Apr 30 00:59:49.728418 systemd-networkd[1373]: cali2471d3d52b0: Link UP Apr 30 00:59:49.728611 systemd-networkd[1373]: cali2471d3d52b0: Gained carrier Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.645 [INFO][3013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-csi--node--driver--4nn59-eth0 csi-node-driver- calico-system 2e006cc2-2ba8-413a-9a6d-b9e91401ca76 1038 0 2025-04-30 00:59:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.153 csi-node-driver-4nn59 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2471d3d52b0 [] []}} ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.645 [INFO][3013] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.687 [INFO][3030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" HandleID="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Workload="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.701 [INFO][3030] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" HandleID="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Workload="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b2040), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.153", "pod":"csi-node-driver-4nn59", "timestamp":"2025-04-30 00:59:49.687427737 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.701 [INFO][3030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.701 [INFO][3030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.701 [INFO][3030] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.702 [INFO][3030] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.706 [INFO][3030] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.710 [INFO][3030] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.712 [INFO][3030] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.714 [INFO][3030] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.714 [INFO][3030] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.716 [INFO][3030] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4 Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.719 [INFO][3030] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.724 [INFO][3030] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.131/26] block=192.168.31.128/26 handle="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.724 [INFO][3030] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.131/26] handle="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" host="10.0.0.153" Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.724 [INFO][3030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:59:49.743478 containerd[1427]: 2025-04-30 00:59:49.724 [INFO][3030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.131/26] IPv6=[] ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" HandleID="k8s-pod-network.4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Workload="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.744122 containerd[1427]: 2025-04-30 00:59:49.726 [INFO][3013] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-csi--node--driver--4nn59-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e006cc2-2ba8-413a-9a6d-b9e91401ca76", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"csi-node-driver-4nn59", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2471d3d52b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:59:49.744122 containerd[1427]: 2025-04-30 00:59:49.726 [INFO][3013] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.131/32] ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.744122 containerd[1427]: 2025-04-30 00:59:49.726 [INFO][3013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2471d3d52b0 ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.744122 containerd[1427]: 2025-04-30 00:59:49.728 [INFO][3013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.744122 containerd[1427]: 2025-04-30 00:59:49.729 [INFO][3013] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" 
WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-csi--node--driver--4nn59-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e006cc2-2ba8-413a-9a6d-b9e91401ca76", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4", Pod:"csi-node-driver-4nn59", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2471d3d52b0", MAC:"42:8a:cf:fd:7d:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:59:49.744122 containerd[1427]: 2025-04-30 00:59:49.741 [INFO][3013] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4" Namespace="calico-system" Pod="csi-node-driver-4nn59" WorkloadEndpoint="10.0.0.153-k8s-csi--node--driver--4nn59-eth0" Apr 30 00:59:49.781416 containerd[1427]: time="2025-04-30T00:59:49.781240659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:59:49.781416 containerd[1427]: time="2025-04-30T00:59:49.781299383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:59:49.781416 containerd[1427]: time="2025-04-30T00:59:49.781321879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:49.781416 containerd[1427]: time="2025-04-30T00:59:49.781408903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:59:49.803464 systemd[1]: Started cri-containerd-4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4.scope - libcontainer container 4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4. 
Apr 30 00:59:49.812221 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:59:49.822971 containerd[1427]: time="2025-04-30T00:59:49.822753757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4nn59,Uid:2e006cc2-2ba8-413a-9a6d-b9e91401ca76,Namespace:calico-system,Attempt:1,} returns sandbox id \"4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4\"" Apr 30 00:59:50.198420 kubelet[1742]: E0430 00:59:50.198359 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:50.821063 containerd[1427]: time="2025-04-30T00:59:50.820997880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:50.821646 containerd[1427]: time="2025-04-30T00:59:50.821591494Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Apr 30 00:59:50.822496 containerd[1427]: time="2025-04-30T00:59:50.822441847Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:50.825789 containerd[1427]: time="2025-04-30T00:59:50.825730462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:50.827075 containerd[1427]: time="2025-04-30T00:59:50.826980254Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.465577773s" Apr 30 00:59:50.827075 containerd[1427]: time="2025-04-30T00:59:50.827020643Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Apr 30 00:59:50.828379 containerd[1427]: time="2025-04-30T00:59:50.828254984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:59:50.829728 containerd[1427]: time="2025-04-30T00:59:50.829681660Z" level=info msg="CreateContainer within sandbox \"d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Apr 30 00:59:50.843156 containerd[1427]: time="2025-04-30T00:59:50.843104466Z" level=info msg="CreateContainer within sandbox \"d1b2d674229fe0634aba8bc8ecbb12a7ed3aa1b4b6010cf0ed32fae11c898754\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4a366e2819f8c5cb9de1e442f8ecb6e1ce4d7a3fec5a2e5ff736877655ccded2\"" Apr 30 00:59:50.843638 containerd[1427]: time="2025-04-30T00:59:50.843615783Z" level=info msg="StartContainer for \"4a366e2819f8c5cb9de1e442f8ecb6e1ce4d7a3fec5a2e5ff736877655ccded2\"" Apr 30 00:59:50.879159 systemd[1]: Started cri-containerd-4a366e2819f8c5cb9de1e442f8ecb6e1ce4d7a3fec5a2e5ff736877655ccded2.scope - libcontainer container 
4a366e2819f8c5cb9de1e442f8ecb6e1ce4d7a3fec5a2e5ff736877655ccded2. Apr 30 00:59:50.994281 containerd[1427]: time="2025-04-30T00:59:50.993753191Z" level=info msg="StartContainer for \"4a366e2819f8c5cb9de1e442f8ecb6e1ce4d7a3fec5a2e5ff736877655ccded2\" returns successfully" Apr 30 00:59:51.105046 systemd-networkd[1373]: cali2471d3d52b0: Gained IPv6LL Apr 30 00:59:51.199120 kubelet[1742]: E0430 00:59:51.199041 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:51.493645 kubelet[1742]: I0430 00:59:51.493575 1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.026545462 podStartE2EDuration="6.493542155s" podCreationTimestamp="2025-04-30 00:59:45 +0000 UTC" firstStartedPulling="2025-04-30 00:59:46.361124318 +0000 UTC m=+21.429397280" lastFinishedPulling="2025-04-30 00:59:50.82812101 +0000 UTC m=+25.896393973" observedRunningTime="2025-04-30 00:59:51.493460381 +0000 UTC m=+26.561733343" watchObservedRunningTime="2025-04-30 00:59:51.493542155 +0000 UTC m=+26.561815117" Apr 30 00:59:51.721705 containerd[1427]: time="2025-04-30T00:59:51.721653430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:51.726667 containerd[1427]: time="2025-04-30T00:59:51.726615344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:59:51.727583 containerd[1427]: time="2025-04-30T00:59:51.727557045Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:51.730330 containerd[1427]: time="2025-04-30T00:59:51.730260788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:51.730671 containerd[1427]: time="2025-04-30T00:59:51.730635235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 902.345667ms" Apr 30 00:59:51.730671 containerd[1427]: time="2025-04-30T00:59:51.730669298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:59:51.733016 containerd[1427]: time="2025-04-30T00:59:51.732986226Z" level=info msg="CreateContainer within sandbox \"4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:59:51.744531 containerd[1427]: time="2025-04-30T00:59:51.744339075Z" level=info msg="CreateContainer within sandbox \"4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"57f19c5ff7e2a3bb95f422cacc00192871c4f7c6033739dd902fc251a5fa4a91\"" Apr 30 00:59:51.744950 containerd[1427]: time="2025-04-30T00:59:51.744883314Z" level=info msg="StartContainer for \"57f19c5ff7e2a3bb95f422cacc00192871c4f7c6033739dd902fc251a5fa4a91\"" Apr 30 
00:59:51.772120 systemd[1]: Started cri-containerd-57f19c5ff7e2a3bb95f422cacc00192871c4f7c6033739dd902fc251a5fa4a91.scope - libcontainer container 57f19c5ff7e2a3bb95f422cacc00192871c4f7c6033739dd902fc251a5fa4a91. Apr 30 00:59:51.797586 containerd[1427]: time="2025-04-30T00:59:51.797540290Z" level=info msg="StartContainer for \"57f19c5ff7e2a3bb95f422cacc00192871c4f7c6033739dd902fc251a5fa4a91\" returns successfully" Apr 30 00:59:51.798796 containerd[1427]: time="2025-04-30T00:59:51.798766419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:59:52.200078 kubelet[1742]: E0430 00:59:52.200041 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:52.810976 containerd[1427]: time="2025-04-30T00:59:52.810882733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:52.811514 containerd[1427]: time="2025-04-30T00:59:52.811473822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:59:52.812175 containerd[1427]: time="2025-04-30T00:59:52.812144720Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:52.814496 containerd[1427]: time="2025-04-30T00:59:52.814458083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:59:52.815009 containerd[1427]: time="2025-04-30T00:59:52.814971084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.01616628s" Apr 30 00:59:52.815046 containerd[1427]: time="2025-04-30T00:59:52.815012709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:59:52.818931 containerd[1427]: time="2025-04-30T00:59:52.818892490Z" level=info msg="CreateContainer within sandbox \"4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:59:52.829535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883470504.mount: Deactivated successfully. 
Apr 30 00:59:52.832387 containerd[1427]: time="2025-04-30T00:59:52.832350006Z" level=info msg="CreateContainer within sandbox \"4a1110917109783c6fa894de1fdce497190c07686fc740a20e23fa955dc986d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"be67162342d9c6733b495b9a1b6f389f289e24ab9bc8db554524149e6bc8e9dd\"" Apr 30 00:59:52.832993 containerd[1427]: time="2025-04-30T00:59:52.832964990Z" level=info msg="StartContainer for \"be67162342d9c6733b495b9a1b6f389f289e24ab9bc8db554524149e6bc8e9dd\"" Apr 30 00:59:52.853582 kubelet[1742]: I0430 00:59:52.853305 1742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:59:52.854245 kubelet[1742]: E0430 00:59:52.854217 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:52.881117 systemd[1]: Started cri-containerd-be67162342d9c6733b495b9a1b6f389f289e24ab9bc8db554524149e6bc8e9dd.scope - libcontainer container be67162342d9c6733b495b9a1b6f389f289e24ab9bc8db554524149e6bc8e9dd. Apr 30 00:59:52.907439 containerd[1427]: time="2025-04-30T00:59:52.907389021Z" level=info msg="StartContainer for \"be67162342d9c6733b495b9a1b6f389f289e24ab9bc8db554524149e6bc8e9dd\" returns successfully" Apr 30 00:59:53.201229 kubelet[1742]: E0430 00:59:53.201162 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:53.442290 kubelet[1742]: I0430 00:59:53.442247 1742 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:59:53.445320 kubelet[1742]: I0430 00:59:53.445295 1742 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:59:53.490245 kubelet[1742]: E0430 00:59:53.490140 1742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:59:53.499772 kubelet[1742]: I0430 00:59:53.499712 1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4nn59" podStartSLOduration=24.508595394 podStartE2EDuration="27.499696496s" podCreationTimestamp="2025-04-30 00:59:26 +0000 UTC" firstStartedPulling="2025-04-30 00:59:49.824569778 +0000 UTC m=+24.892842740" lastFinishedPulling="2025-04-30 00:59:52.81567088 +0000 UTC m=+27.883943842" observedRunningTime="2025-04-30 00:59:53.499235143 +0000 UTC m=+28.567508105" watchObservedRunningTime="2025-04-30 00:59:53.499696496 +0000 UTC m=+28.567969458" Apr 30 00:59:53.857074 systemd[1]: run-containerd-runc-k8s.io-fca694a207163b744c4e4e148b18ee62906061837a9158fe0ac529e4b4a2c8e4-runc.DABaqi.mount: Deactivated successfully. 
Apr 30 00:59:54.202257 kubelet[1742]: E0430 00:59:54.202204 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:55.203252 kubelet[1742]: E0430 00:59:55.203207 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:56.203572 kubelet[1742]: E0430 00:59:56.203530 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:57.204656 kubelet[1742]: E0430 00:59:57.204609 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:58.205762 kubelet[1742]: E0430 00:59:58.205690 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 00:59:58.241438 kubelet[1742]: I0430 00:59:58.241396 1742 topology_manager.go:215] "Topology Admit Handler" podUID="687e1c2a-b0b3-4b1a-b1ff-6758a9707115" podNamespace="default" podName="test-pod-1" Apr 30 00:59:58.246881 systemd[1]: Created slice kubepods-besteffort-pod687e1c2a_b0b3_4b1a_b1ff_6758a9707115.slice - libcontainer container kubepods-besteffort-pod687e1c2a_b0b3_4b1a_b1ff_6758a9707115.slice. Apr 30 00:59:58.386021 kubelet[1742]: I0430 00:59:58.385984 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlkd\" (UniqueName: \"kubernetes.io/projected/687e1c2a-b0b3-4b1a-b1ff-6758a9707115-kube-api-access-kxlkd\") pod \"test-pod-1\" (UID: \"687e1c2a-b0b3-4b1a-b1ff-6758a9707115\") " pod="default/test-pod-1" Apr 30 00:59:58.386180 kubelet[1742]: I0430 00:59:58.386034 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7225e547-77e1-40d2-98f6-210fb929daed\" (UniqueName: \"kubernetes.io/nfs/687e1c2a-b0b3-4b1a-b1ff-6758a9707115-pvc-7225e547-77e1-40d2-98f6-210fb929daed\") pod \"test-pod-1\" (UID: \"687e1c2a-b0b3-4b1a-b1ff-6758a9707115\") " pod="default/test-pod-1" Apr 30 00:59:58.510019 kernel: FS-Cache: Loaded Apr 30 00:59:58.538338 kernel: RPC: Registered named UNIX socket transport module. Apr 30 00:59:58.538504 kernel: RPC: Registered udp transport module. Apr 30 00:59:58.538543 kernel: RPC: Registered tcp transport module. Apr 30 00:59:58.538570 kernel: RPC: Registered tcp-with-tls transport module. Apr 30 00:59:58.538985 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Apr 30 00:59:58.715265 kernel: NFS: Registering the id_resolver key type Apr 30 00:59:58.715516 kernel: Key type id_resolver registered Apr 30 00:59:58.715532 kernel: Key type id_legacy registered Apr 30 00:59:58.741411 nfsidmap[3348]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Apr 30 00:59:58.745956 nfsidmap[3351]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Apr 30 00:59:58.849697 containerd[1427]: time="2025-04-30T00:59:58.849472236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:687e1c2a-b0b3-4b1a-b1ff-6758a9707115,Namespace:default,Attempt:0,}" Apr 30 00:59:58.963452 systemd-networkd[1373]: cali5ec59c6bf6e: Link UP Apr 30 00:59:58.964211 systemd-networkd[1373]: cali5ec59c6bf6e: Gained carrier Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.901 [INFO][3354] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.153-k8s-test--pod--1-eth0 default 687e1c2a-b0b3-4b1a-b1ff-6758a9707115 1094 0 2025-04-30 00:59:46 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.153 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.901 [INFO][3354] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.924 [INFO][3368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" HandleID="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Workload="10.0.0.153-k8s-test--pod--1-eth0" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.936 [INFO][3368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" HandleID="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Workload="10.0.0.153-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db200), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.153", "pod":"test-pod-1", "timestamp":"2025-04-30 00:59:58.924609165 +0000 UTC"}, Hostname:"10.0.0.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.936 [INFO][3368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.936 [INFO][3368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.936 [INFO][3368] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.153' Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.938 [INFO][3368] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.941 [INFO][3368] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.946 [INFO][3368] ipam/ipam.go 489: Trying affinity for 192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.947 [INFO][3368] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.950 [INFO][3368] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.950 [INFO][3368] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.951 [INFO][3368] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.954 [INFO][3368] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.959 [INFO][3368] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.132/26] block=192.168.31.128/26 handle="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.959 [INFO][3368] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.132/26] handle="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" host="10.0.0.153" Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.959 [INFO][3368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.959 [INFO][3368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.132/26] IPv6=[] ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" HandleID="k8s-pod-network.76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Workload="10.0.0.153-k8s-test--pod--1-eth0"
Apr 30 00:59:58.972884 containerd[1427]: 2025-04-30 00:59:58.961 [INFO][3354] cni-plugin/k8s.go 386: Populated endpoint ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"687e1c2a-b0b3-4b1a-b1ff-6758a9707115", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 00:59:58.973444 containerd[1427]: 2025-04-30 00:59:58.961 [INFO][3354] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.132/32] ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Apr 30 00:59:58.973444 containerd[1427]: 2025-04-30 00:59:58.961 [INFO][3354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Apr 30 00:59:58.973444 containerd[1427]: 2025-04-30 00:59:58.963 [INFO][3354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Apr 30 00:59:58.973444 containerd[1427]: 2025-04-30 00:59:58.964 [INFO][3354] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"687e1c2a-b0b3-4b1a-b1ff-6758a9707115", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 59, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.153", ContainerID:"76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:ad:71:fe:88:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 00:59:58.973444 containerd[1427]: 2025-04-30 00:59:58.971 [INFO][3354] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.153-k8s-test--pod--1-eth0"
Apr 30 00:59:58.992365 containerd[1427]: time="2025-04-30T00:59:58.992155625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:59:58.992365 containerd[1427]: time="2025-04-30T00:59:58.992202166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:59:58.992365 containerd[1427]: time="2025-04-30T00:59:58.992212971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:59:58.992365 containerd[1427]: time="2025-04-30T00:59:58.992287365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:59:59.015081 systemd[1]: Started cri-containerd-76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc.scope - libcontainer container 76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc.
Apr 30 00:59:59.024529 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 30 00:59:59.039504 containerd[1427]: time="2025-04-30T00:59:59.039469712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:687e1c2a-b0b3-4b1a-b1ff-6758a9707115,Namespace:default,Attempt:0,} returns sandbox id \"76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc\""
Apr 30 00:59:59.040805 containerd[1427]: time="2025-04-30T00:59:59.040779954Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Apr 30 00:59:59.206245 kubelet[1742]: E0430 00:59:59.206210 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 00:59:59.286512 containerd[1427]: time="2025-04-30T00:59:59.286457573Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:59:59.286902 containerd[1427]: time="2025-04-30T00:59:59.286867069Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Apr 30 00:59:59.290463 containerd[1427]: time="2025-04-30T00:59:59.290428876Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e20c52090e36e47716225aae95fda06191c98f3a8d7f6371786c19c9e59befb1\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"69948516\" in 249.612987ms"
Apr 30 00:59:59.290463 containerd[1427]: time="2025-04-30T00:59:59.290464331Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e20c52090e36e47716225aae95fda06191c98f3a8d7f6371786c19c9e59befb1\""
Apr 30 00:59:59.292259 containerd[1427]: time="2025-04-30T00:59:59.292233250Z" level=info msg="CreateContainer within sandbox \"76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Apr 30 00:59:59.302525 containerd[1427]: time="2025-04-30T00:59:59.302479523Z" level=info msg="CreateContainer within sandbox \"76cbd60db97b01a5cc665fcdb3bd3d56301f61069fb391fb882cbbf232fe72cc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3d4c4d103ac3af99417a99c8ea7b37215969665b9602bf607233fa04cb0dfd9b\""
Apr 30 00:59:59.302948 containerd[1427]: time="2025-04-30T00:59:59.302909548Z" level=info msg="StartContainer for \"3d4c4d103ac3af99417a99c8ea7b37215969665b9602bf607233fa04cb0dfd9b\""
Apr 30 00:59:59.332128 systemd[1]: Started cri-containerd-3d4c4d103ac3af99417a99c8ea7b37215969665b9602bf607233fa04cb0dfd9b.scope - libcontainer container 3d4c4d103ac3af99417a99c8ea7b37215969665b9602bf607233fa04cb0dfd9b.
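The RunPodSandbox / PullImage / CreateContainer / StartContainer messages above are CRI calls made by the kubelet into containerd's CRI plugin. The sketch below approximates the same pull/create/start flow with containerd's plain Go client rather than the CRI path shown in the log; the container ID "test" and snapshot name "test-snapshot" are illustrative choices, not values from the log, and the pod-sandbox plumbing is omitted.

```go
// Rough equivalent of the PullImage + CreateContainer + StartContainer sequence
// above, using the containerd Go client directly (not the CRI plugin).
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Roughly what the PullImage call above does.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly CreateContainer, minus the sandbox association handled by the CRI plugin.
	container, err := client.NewContainer(ctx, "test",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("test-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly StartContainer: create the task (shim + runc) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	// Left running here; a real caller would Wait/Kill the task and Delete
	// the task and container when done.
	log.Println("started", container.ID())
}
```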
Apr 30 00:59:59.353275 containerd[1427]: time="2025-04-30T00:59:59.353230764Z" level=info msg="StartContainer for \"3d4c4d103ac3af99417a99c8ea7b37215969665b9602bf607233fa04cb0dfd9b\" returns successfully"
Apr 30 00:59:59.514362 kubelet[1742]: I0430 00:59:59.514215 1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=13.263701096 podStartE2EDuration="13.514190219s" podCreationTimestamp="2025-04-30 00:59:46 +0000 UTC" firstStartedPulling="2025-04-30 00:59:59.040543252 +0000 UTC m=+34.108816214" lastFinishedPulling="2025-04-30 00:59:59.291032375 +0000 UTC m=+34.359305337" observedRunningTime="2025-04-30 00:59:59.514165928 +0000 UTC m=+34.582439010" watchObservedRunningTime="2025-04-30 00:59:59.514190219 +0000 UTC m=+34.582463181"
Apr 30 01:00:00.065133 systemd-networkd[1373]: cali5ec59c6bf6e: Gained IPv6LL
Apr 30 01:00:00.207075 kubelet[1742]: E0430 01:00:00.207028 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 01:00:01.028972 update_engine[1417]: I20250430 01:00:01.028573 1417 update_attempter.cc:509] Updating boot flags...
Apr 30 01:00:01.051956 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3335)
Apr 30 01:00:01.077169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3339)
Apr 30 01:00:01.207872 kubelet[1742]: E0430 01:00:01.207822 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 01:00:02.209925 kubelet[1742]: E0430 01:00:02.208128 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 01:00:03.208801 kubelet[1742]: E0430 01:00:03.208756 1742 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
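The two durations in the pod_startup_latency_tracker line above are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The short Go sketch below re-derives both numbers from the timestamps in that log line; it is an interpretation of the logged values, not kubelet code.

```go
// Re-deriving the durations reported by pod_startup_latency_tracker above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time's default String() format used in the log line.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-04-30 00:59:46 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-04-30 00:59:59.040543252 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-04-30 00:59:59.291032375 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-04-30 00:59:59.514190219 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 13.514190219s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 13.263701096s = podStartSLOduration (E2E minus pull time)
	fmt.Println(e2e, slo)
}
```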