Mar 17 17:46:16.887392 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 17:46:16.887414 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025 Mar 17 17:46:16.887424 kernel: KASLR enabled Mar 17 17:46:16.887429 kernel: efi: EFI v2.7 by EDK II Mar 17 17:46:16.887435 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Mar 17 17:46:16.887441 kernel: random: crng init done Mar 17 17:46:16.887447 kernel: secureboot: Secure boot disabled Mar 17 17:46:16.887453 kernel: ACPI: Early table checksum verification disabled Mar 17 17:46:16.887459 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Mar 17 17:46:16.887466 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 17 17:46:16.887472 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887478 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887483 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887489 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887496 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887504 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887510 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887535 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887543 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:46:16.887549 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 17 17:46:16.887555 kernel: NUMA: Failed to initialise from firmware Mar 17 17:46:16.887561 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:46:16.887567 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Mar 17 17:46:16.887573 kernel: Zone ranges: Mar 17 17:46:16.887579 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:46:16.887586 kernel: DMA32 empty Mar 17 17:46:16.887592 kernel: Normal empty Mar 17 17:46:16.887598 kernel: Movable zone start for each node Mar 17 17:46:16.887604 kernel: Early memory node ranges Mar 17 17:46:16.887610 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Mar 17 17:46:16.887616 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Mar 17 17:46:16.887622 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Mar 17 17:46:16.887628 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Mar 17 17:46:16.887637 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Mar 17 17:46:16.887643 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Mar 17 17:46:16.887649 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Mar 17 17:46:16.887655 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Mar 17 17:46:16.887663 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Mar 17 17:46:16.887669 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:46:16.887675 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 17 17:46:16.887684 kernel: psci: 
probing for conduit method from ACPI. Mar 17 17:46:16.887690 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 17:46:16.887697 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 17:46:16.887705 kernel: psci: Trusted OS migration not required Mar 17 17:46:16.887711 kernel: psci: SMC Calling Convention v1.1 Mar 17 17:46:16.887718 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 17 17:46:16.887724 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 17:46:16.887730 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 17:46:16.887737 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 17 17:46:16.887743 kernel: Detected PIPT I-cache on CPU0 Mar 17 17:46:16.887749 kernel: CPU features: detected: GIC system register CPU interface Mar 17 17:46:16.887756 kernel: CPU features: detected: Hardware dirty bit management Mar 17 17:46:16.887762 kernel: CPU features: detected: Spectre-v4 Mar 17 17:46:16.887774 kernel: CPU features: detected: Spectre-BHB Mar 17 17:46:16.887781 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 17:46:16.887788 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 17:46:16.887794 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 17:46:16.887801 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 17:46:16.887807 kernel: alternatives: applying boot alternatives Mar 17 17:46:16.887814 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:46:16.887821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:46:16.887828 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:46:16.887834 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:46:16.887840 kernel: Fallback order for Node 0: 0 Mar 17 17:46:16.887848 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 17 17:46:16.887855 kernel: Policy zone: DMA Mar 17 17:46:16.887861 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:46:16.887867 kernel: software IO TLB: area num 4. Mar 17 17:46:16.887874 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Mar 17 17:46:16.887881 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved) Mar 17 17:46:16.887887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 17:46:16.887894 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:46:16.887901 kernel: rcu: RCU event tracing is enabled. Mar 17 17:46:16.887907 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 17:46:16.887914 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:46:16.887920 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:46:16.887929 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:46:16.887935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 17:46:16.887942 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 17:46:16.887948 kernel: GICv3: 256 SPIs implemented Mar 17 17:46:16.887954 kernel: GICv3: 0 Extended SPIs implemented Mar 17 17:46:16.887960 kernel: Root IRQ handler: gic_handle_irq Mar 17 17:46:16.887967 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 17 17:46:16.887973 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 17 17:46:16.887980 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 17 17:46:16.887986 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 17:46:16.887992 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Mar 17 17:46:16.888000 kernel: GICv3: using LPI property table @0x00000000400f0000 Mar 17 17:46:16.888007 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Mar 17 17:46:16.888013 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:46:16.888019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:46:16.888026 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 17:46:16.888032 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 17:46:16.888039 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 17:46:16.888045 kernel: arm-pv: using stolen time PV Mar 17 17:46:16.888052 kernel: Console: colour dummy device 80x25 Mar 17 17:46:16.888058 kernel: ACPI: Core revision 20230628 Mar 17 17:46:16.888065 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 17:46:16.888073 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:46:16.888080 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:46:16.888086 kernel: landlock: Up and running. Mar 17 17:46:16.888093 kernel: SELinux: Initializing. Mar 17 17:46:16.888099 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:46:16.888106 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:46:16.888112 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:46:16.888119 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:46:16.888126 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:46:16.888134 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:46:16.888140 kernel: Platform MSI: ITS@0x8080000 domain created Mar 17 17:46:16.888147 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 17 17:46:16.888153 kernel: Remapping and enabling EFI services. Mar 17 17:46:16.888159 kernel: smp: Bringing up secondary CPUs ... 
Mar 17 17:46:16.888166 kernel: Detected PIPT I-cache on CPU1 Mar 17 17:46:16.888173 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 17 17:46:16.888179 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Mar 17 17:46:16.888186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:46:16.888194 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 17:46:16.888201 kernel: Detected PIPT I-cache on CPU2 Mar 17 17:46:16.888212 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 17 17:46:16.888221 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Mar 17 17:46:16.888228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:46:16.888234 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 17 17:46:16.888241 kernel: Detected PIPT I-cache on CPU3 Mar 17 17:46:16.888252 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 17 17:46:16.888259 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Mar 17 17:46:16.888268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:46:16.888274 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 17 17:46:16.888282 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 17:46:16.888288 kernel: SMP: Total of 4 processors activated. Mar 17 17:46:16.888296 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 17:46:16.888302 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 17:46:16.888309 kernel: CPU features: detected: Common not Private translations Mar 17 17:46:16.888316 kernel: CPU features: detected: CRC32 instructions Mar 17 17:46:16.888323 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 17 17:46:16.888331 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 17:46:16.888338 kernel: CPU features: detected: LSE atomic instructions Mar 17 17:46:16.888345 kernel: CPU features: detected: Privileged Access Never Mar 17 17:46:16.888352 kernel: CPU features: detected: RAS Extension Support Mar 17 17:46:16.888359 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 17 17:46:16.888366 kernel: CPU: All CPU(s) started at EL1 Mar 17 17:46:16.888373 kernel: alternatives: applying system-wide alternatives Mar 17 17:46:16.888380 kernel: devtmpfs: initialized Mar 17 17:46:16.888387 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:46:16.888396 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 17:46:16.888403 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:46:16.888410 kernel: SMBIOS 3.0.0 present. 
Mar 17 17:46:16.888416 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Mar 17 17:46:16.888423 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:46:16.888430 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 17:46:16.888437 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 17:46:16.888444 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 17:46:16.888453 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:46:16.888460 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Mar 17 17:46:16.888467 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:46:16.888473 kernel: cpuidle: using governor menu Mar 17 17:46:16.888480 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 17 17:46:16.888487 kernel: ASID allocator initialised with 32768 entries Mar 17 17:46:16.888494 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:46:16.888501 kernel: Serial: AMBA PL011 UART driver Mar 17 17:46:16.888508 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 17 17:46:16.888514 kernel: Modules: 0 pages in range for non-PLT usage Mar 17 17:46:16.888577 kernel: Modules: 509280 pages in range for PLT usage Mar 17 17:46:16.888584 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:46:16.888591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:46:16.888598 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 17:46:16.888605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 17:46:16.888612 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:46:16.888619 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:46:16.888626 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 17:46:16.888633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 17:46:16.888641 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:46:16.888648 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:46:16.888655 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:46:16.888662 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:46:16.888668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:46:16.888675 kernel: ACPI: Interpreter enabled Mar 17 17:46:16.888682 kernel: ACPI: Using GIC for interrupt routing Mar 17 17:46:16.888689 kernel: ACPI: MCFG table detected, 1 entries Mar 17 17:46:16.888696 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 17 17:46:16.888704 kernel: printk: console [ttyAMA0] enabled Mar 17 17:46:16.888711 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:46:16.888869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:46:16.888949 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 17:46:16.889031 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 17:46:16.889099 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 17 17:46:16.889165 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 17 17:46:16.889178 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 17 17:46:16.889185 
kernel: PCI host bridge to bus 0000:00 Mar 17 17:46:16.889263 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 17 17:46:16.889327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 17:46:16.889387 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 17 17:46:16.889450 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:46:16.889579 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 17 17:46:16.889668 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:46:16.889739 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 17 17:46:16.889819 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 17 17:46:16.889889 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:46:16.889958 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:46:16.890028 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 17 17:46:16.890097 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 17 17:46:16.890165 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 17 17:46:16.890228 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 17:46:16.890292 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 17 17:46:16.890302 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 17:46:16.890309 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 17:46:16.890316 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 17:46:16.890323 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 17:46:16.890330 kernel: iommu: Default domain type: Translated Mar 17 17:46:16.890339 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 17:46:16.890345 kernel: efivars: Registered efivars operations Mar 17 17:46:16.890352 kernel: vgaarb: loaded Mar 17 17:46:16.890360 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 17:46:16.890367 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:46:16.890374 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:46:16.890381 kernel: pnp: PnP ACPI init Mar 17 17:46:16.890460 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 17 17:46:16.890472 kernel: pnp: PnP ACPI: found 1 devices Mar 17 17:46:16.890479 kernel: NET: Registered PF_INET protocol family Mar 17 17:46:16.890487 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:46:16.890494 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:46:16.890501 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:46:16.890508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:46:16.890515 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:46:16.890539 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:46:16.890547 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:46:16.890556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:46:16.890563 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:46:16.890570 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:46:16.890577 kernel: kvm [1]: HYP mode not available 
Mar 17 17:46:16.890584 kernel: Initialise system trusted keyrings Mar 17 17:46:16.890591 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:46:16.890598 kernel: Key type asymmetric registered Mar 17 17:46:16.890605 kernel: Asymmetric key parser 'x509' registered Mar 17 17:46:16.890612 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 17:46:16.890620 kernel: io scheduler mq-deadline registered Mar 17 17:46:16.890627 kernel: io scheduler kyber registered Mar 17 17:46:16.890634 kernel: io scheduler bfq registered Mar 17 17:46:16.890641 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:46:16.890648 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:46:16.890655 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:46:16.890731 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 17 17:46:16.890741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:46:16.890748 kernel: thunder_xcv, ver 1.0 Mar 17 17:46:16.890757 kernel: thunder_bgx, ver 1.0 Mar 17 17:46:16.890764 kernel: nicpf, ver 1.0 Mar 17 17:46:16.890777 kernel: nicvf, ver 1.0 Mar 17 17:46:16.890861 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:46:16.890932 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:46:16 UTC (1742233576) Mar 17 17:46:16.890942 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:46:16.890949 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:46:16.890956 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:46:16.890966 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:46:16.890973 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:46:16.890979 kernel: Segment Routing with IPv6 Mar 17 17:46:16.890986 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:46:16.890993 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:46:16.891000 kernel: Key type dns_resolver registered Mar 17 17:46:16.891007 kernel: registered taskstats version 1 Mar 17 17:46:16.891014 kernel: Loading compiled-in X.509 certificates Mar 17 17:46:16.891021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 17:46:16.891030 kernel: Key type .fscrypt registered Mar 17 17:46:16.891037 kernel: Key type fscrypt-provisioning registered Mar 17 17:46:16.891044 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:46:16.891051 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:46:16.891058 kernel: ima: No architecture policies found Mar 17 17:46:16.891065 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:46:16.891072 kernel: clk: Disabling unused clocks Mar 17 17:46:16.891079 kernel: Freeing unused kernel memory: 38336K Mar 17 17:46:16.891086 kernel: Run /init as init process Mar 17 17:46:16.891094 kernel: with arguments: Mar 17 17:46:16.891101 kernel: /init Mar 17 17:46:16.891108 kernel: with environment: Mar 17 17:46:16.891115 kernel: HOME=/ Mar 17 17:46:16.891122 kernel: TERM=linux Mar 17 17:46:16.891129 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:46:16.891136 systemd[1]: Successfully made /usr/ read-only. 
Mar 17 17:46:16.891146 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:46:16.891156 systemd[1]: Detected virtualization kvm. Mar 17 17:46:16.891163 systemd[1]: Detected architecture arm64. Mar 17 17:46:16.891170 systemd[1]: Running in initrd. Mar 17 17:46:16.891178 systemd[1]: No hostname configured, using default hostname. Mar 17 17:46:16.891186 systemd[1]: Hostname set to . Mar 17 17:46:16.891193 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:46:16.891200 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:46:16.891208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:46:16.891217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:46:16.891225 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:46:16.891232 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:46:16.891240 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:46:16.891248 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:46:16.891257 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:46:16.891266 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:46:16.891274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:46:16.891281 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:46:16.891289 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:46:16.891297 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:46:16.891304 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:46:16.891312 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:46:16.891319 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:46:16.891327 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:46:16.891336 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:46:16.891344 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:46:16.891352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:46:16.891359 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:46:16.891367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:46:16.891374 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:46:16.891382 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:46:16.891389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:46:16.891398 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:46:16.891405 systemd[1]: Starting systemd-fsck-usr.service... 
Mar 17 17:46:16.891413 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:46:16.891420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:46:16.891428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:16.891435 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:46:16.891443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:46:16.891452 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:46:16.891460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:46:16.891468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:16.891492 systemd-journald[238]: Collecting audit messages is disabled. Mar 17 17:46:16.891512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:46:16.891537 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:46:16.891545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:46:16.891554 systemd-journald[238]: Journal started Mar 17 17:46:16.891574 systemd-journald[238]: Runtime Journal (/run/log/journal/e6286b39c01746c7b0692a9bd61c0514) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:46:16.883588 systemd-modules-load[239]: Inserted module 'overlay' Mar 17 17:46:16.894891 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:46:16.896566 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:46:16.899539 kernel: Bridge firewalling registered Mar 17 17:46:16.898413 systemd-modules-load[239]: Inserted module 'br_netfilter' Mar 17 17:46:16.900697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:46:16.902561 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:46:16.906127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:46:16.914740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:46:16.915788 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:46:16.917325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:46:16.920083 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:46:16.921102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:16.923999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:46:16.933391 dracut-cmdline[274]: dracut-dracut-053 Mar 17 17:46:16.939135 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:46:16.967752 systemd-resolved[276]: Positive Trust Anchors: Mar 17 17:46:16.967776 systemd-resolved[276]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:46:16.967809 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:46:16.972578 systemd-resolved[276]: Defaulting to hostname 'linux'. Mar 17 17:46:16.973599 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:46:16.974809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:46:17.008550 kernel: SCSI subsystem initialized Mar 17 17:46:17.012541 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:46:17.020546 kernel: iscsi: registered transport (tcp) Mar 17 17:46:17.032656 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:46:17.032691 kernel: QLogic iSCSI HBA Driver Mar 17 17:46:17.074318 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:46:17.081733 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:46:17.099780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:46:17.099821 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:46:17.099832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:46:17.149551 kernel: raid6: neonx8 gen() 15763 MB/s Mar 17 17:46:17.166533 kernel: raid6: neonx4 gen() 15802 MB/s Mar 17 17:46:17.183535 kernel: raid6: neonx2 gen() 13214 MB/s Mar 17 17:46:17.200543 kernel: raid6: neonx1 gen() 10538 MB/s Mar 17 17:46:17.217542 kernel: raid6: int64x8 gen() 6791 MB/s Mar 17 17:46:17.234543 kernel: raid6: int64x4 gen() 7343 MB/s Mar 17 17:46:17.251539 kernel: raid6: int64x2 gen() 6112 MB/s Mar 17 17:46:17.268535 kernel: raid6: int64x1 gen() 5058 MB/s Mar 17 17:46:17.268548 kernel: raid6: using algorithm neonx4 gen() 15802 MB/s Mar 17 17:46:17.285553 kernel: raid6: .... xor() 12408 MB/s, rmw enabled Mar 17 17:46:17.285572 kernel: raid6: using neon recovery algorithm Mar 17 17:46:17.290532 kernel: xor: measuring software checksum speed Mar 17 17:46:17.290547 kernel: 8regs : 20977 MB/sec Mar 17 17:46:17.291955 kernel: 32regs : 20227 MB/sec Mar 17 17:46:17.291979 kernel: arm64_neon : 27851 MB/sec Mar 17 17:46:17.291996 kernel: xor: using function: arm64_neon (27851 MB/sec) Mar 17 17:46:17.342545 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:46:17.353216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:46:17.363690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:46:17.376365 systemd-udevd[459]: Using default interface naming scheme 'v255'. Mar 17 17:46:17.380126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:46:17.382108 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 17 17:46:17.396603 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Mar 17 17:46:17.421394 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:46:17.428741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:46:17.471036 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:46:17.482960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:46:17.492044 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:46:17.494422 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:46:17.495388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:46:17.497614 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:46:17.505721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:46:17.515435 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:46:17.538539 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 17 17:46:17.542488 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 17:46:17.542615 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:46:17.542627 kernel: GPT:9289727 != 19775487 Mar 17 17:46:17.542636 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:46:17.542645 kernel: GPT:9289727 != 19775487 Mar 17 17:46:17.542653 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:46:17.542662 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:46:17.539033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:46:17.539143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:46:17.547854 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:46:17.549829 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:46:17.549979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:17.553873 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:17.562562 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508) Mar 17 17:46:17.562598 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518) Mar 17 17:46:17.563753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:17.575686 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:46:17.576772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:17.589758 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:46:17.604534 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:46:17.605421 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:46:17.613390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:46:17.626668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 17 17:46:17.628645 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:46:17.636543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:46:17.637664 disk-uuid[550]: Primary Header is updated. Mar 17 17:46:17.637664 disk-uuid[550]: Secondary Entries is updated. Mar 17 17:46:17.637664 disk-uuid[550]: Secondary Header is updated. Mar 17 17:46:17.644345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:46:17.652556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:46:18.654555 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:46:18.655454 disk-uuid[559]: The operation has completed successfully. Mar 17 17:46:18.684264 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:46:18.684383 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:46:18.718660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:46:18.721896 sh[573]: Success Mar 17 17:46:18.746544 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:46:18.786317 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:46:18.787357 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:46:18.789684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:46:18.801210 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 17:46:18.801246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:46:18.801257 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:46:18.801266 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:46:18.802532 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:46:18.805154 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:46:18.806216 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:46:18.806922 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:46:18.808882 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:46:18.820546 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:18.820582 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:46:18.821673 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:46:18.823546 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:46:18.831536 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:18.836356 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:46:18.844693 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:46:18.865093 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:46:18.902488 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:46:18.917670 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 17 17:46:18.940027 ignition[671]: Ignition 2.20.0 Mar 17 17:46:18.940039 ignition[671]: Stage: fetch-offline Mar 17 17:46:18.940074 ignition[671]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:18.940083 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:46:18.940266 ignition[671]: parsed url from cmdline: "" Mar 17 17:46:18.940269 ignition[671]: no config URL provided Mar 17 17:46:18.940274 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:46:18.940282 ignition[671]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:46:18.940305 ignition[671]: op(1): [started] loading QEMU firmware config module Mar 17 17:46:18.940310 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:46:18.949338 systemd-networkd[765]: lo: Link UP Mar 17 17:46:18.949349 systemd-networkd[765]: lo: Gained carrier Mar 17 17:46:18.949585 ignition[671]: op(1): [finished] loading QEMU firmware config module Mar 17 17:46:18.950198 systemd-networkd[765]: Enumeration completed Mar 17 17:46:18.950590 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:46:18.950593 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:46:18.950766 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:46:18.951979 systemd[1]: Reached target network.target - Network. Mar 17 17:46:18.952033 systemd-networkd[765]: eth0: Link UP Mar 17 17:46:18.952036 systemd-networkd[765]: eth0: Gained carrier Mar 17 17:46:18.952043 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:46:18.963515 ignition[671]: parsing config with SHA512: 9b319a9d5773141616fe532910348c2616d9dd4dd87210f6bfefb78f87c804d009eab1fe3b04b116df48a974d5ac6d9c0cbdde070ca3d3a4ad259a9ba97d1187 Mar 17 17:46:18.966794 unknown[671]: fetched base config from "system" Mar 17 17:46:18.966808 unknown[671]: fetched user config from "qemu" Mar 17 17:46:18.967270 ignition[671]: fetch-offline: fetch-offline passed Mar 17 17:46:18.967370 ignition[671]: Ignition finished successfully Mar 17 17:46:18.968558 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:46:18.968896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:46:18.970808 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 17:46:18.981744 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:46:18.993886 ignition[773]: Ignition 2.20.0 Mar 17 17:46:18.993895 ignition[773]: Stage: kargs Mar 17 17:46:18.994052 ignition[773]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:18.994061 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:46:18.996894 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:46:18.994697 ignition[773]: kargs: kargs passed Mar 17 17:46:18.994738 ignition[773]: Ignition finished successfully Mar 17 17:46:19.006680 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:46:19.015961 ignition[782]: Ignition 2.20.0 Mar 17 17:46:19.015970 ignition[782]: Stage: disks Mar 17 17:46:19.018830 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 17 17:46:19.016116 ignition[782]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:19.020192 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:46:19.016125 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:46:19.021426 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:46:19.016754 ignition[782]: disks: disks passed Mar 17 17:46:19.023039 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:46:19.016802 ignition[782]: Ignition finished successfully Mar 17 17:46:19.024602 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:46:19.026165 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:46:19.028630 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:46:19.041253 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:46:19.045142 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:46:19.047272 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:46:19.093488 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:46:19.094864 kernel: EXT4-fs (vda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none. Mar 17 17:46:19.094809 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:46:19.105593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:46:19.107231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:46:19.108503 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:46:19.108555 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:46:19.114046 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) Mar 17 17:46:19.108579 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:46:19.117025 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:19.117041 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:46:19.117051 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:46:19.117061 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:46:19.116279 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:46:19.119376 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:46:19.121077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:46:19.162201 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:46:19.166080 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:46:19.169858 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:46:19.173577 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:46:19.246002 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:46:19.268146 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:46:19.271737 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 17 17:46:19.276533 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:19.289978 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:46:19.294261 ignition[917]: INFO : Ignition 2.20.0 Mar 17 17:46:19.294261 ignition[917]: INFO : Stage: mount Mar 17 17:46:19.295540 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:19.295540 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:46:19.295540 ignition[917]: INFO : mount: mount passed Mar 17 17:46:19.295540 ignition[917]: INFO : Ignition finished successfully Mar 17 17:46:19.296751 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:46:19.304739 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:46:19.866057 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:46:19.874704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:46:19.880530 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930) Mar 17 17:46:19.882231 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:46:19.882258 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:46:19.882783 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:46:19.884532 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:46:19.885806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:46:19.901651 ignition[947]: INFO : Ignition 2.20.0 Mar 17 17:46:19.903287 ignition[947]: INFO : Stage: files Mar 17 17:46:19.903287 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:19.903287 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:46:19.905705 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:46:19.905705 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:46:19.905705 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:46:19.908632 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:46:19.908632 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:46:19.908632 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:46:19.908632 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:46:19.906782 unknown[947]: wrote ssh authorized keys file for user: core Mar 17 17:46:19.921962 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 17 17:46:20.072956 systemd-networkd[765]: eth0: Gained IPv6LL Mar 17 17:46:20.188662 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 17:46:20.473761 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:46:20.473761 ignition[947]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 17 17:46:20.476399 ignition[947]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:46:20.476399 ignition[947]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:46:20.476399 ignition[947]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Mar 17 17:46:20.476399 ignition[947]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:46:20.491068 ignition[947]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:46:20.494283 ignition[947]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:46:20.496466 ignition[947]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:46:20.496466 ignition[947]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:46:20.496466 ignition[947]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:46:20.496466 ignition[947]: INFO : files: files passed Mar 17 17:46:20.496466 ignition[947]: INFO : Ignition finished successfully Mar 17 17:46:20.496793 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:46:20.510705 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:46:20.512941 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:46:20.514927 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:46:20.515020 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 17 17:46:20.519955 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:46:20.522505 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:46:20.522505 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:46:20.525370 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:46:20.526400 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:46:20.527553 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:46:20.543740 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:46:20.562652 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:46:20.563428 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:46:20.564782 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:46:20.565946 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:46:20.567316 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:46:20.568058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:46:20.582131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:46:20.587676 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:46:20.594841 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:46:20.595764 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:46:20.597279 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:46:20.598681 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:46:20.598791 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:46:20.600692 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:46:20.602152 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:46:20.603324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:46:20.604571 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:46:20.606018 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:46:20.607527 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:46:20.608893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:46:20.610405 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:46:20.611947 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:46:20.613330 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:46:20.614504 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:46:20.614641 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:46:20.616417 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:46:20.617973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:46:20.619359 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 17 17:46:20.622602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:46:20.623577 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:46:20.623693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:46:20.625783 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:46:20.625908 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:46:20.627344 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:46:20.628475 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:46:20.632618 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:46:20.634649 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:46:20.635377 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:46:20.636139 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:46:20.636217 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:46:20.637542 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:46:20.637611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:46:20.639284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:46:20.639385 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:46:20.640908 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:46:20.641003 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:46:20.650752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:46:20.651425 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:46:20.651559 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:46:20.656769 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:46:20.657415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:46:20.657574 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:46:20.658924 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:46:20.659020 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:46:20.663541 ignition[1001]: INFO : Ignition 2.20.0 Mar 17 17:46:20.663541 ignition[1001]: INFO : Stage: umount Mar 17 17:46:20.665264 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:46:20.665264 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:46:20.665264 ignition[1001]: INFO : umount: umount passed Mar 17 17:46:20.665264 ignition[1001]: INFO : Ignition finished successfully Mar 17 17:46:20.665247 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:46:20.666551 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:46:20.667826 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:46:20.667902 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:46:20.670642 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:46:20.671072 systemd[1]: Stopped target network.target - Network. Mar 17 17:46:20.672114 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 17 17:46:20.672169 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:46:20.673446 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:46:20.673487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:46:20.674959 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:46:20.675001 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:46:20.676201 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:46:20.676240 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:46:20.677589 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:46:20.678835 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:46:20.689411 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:46:20.689551 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:46:20.692918 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:46:20.693173 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:46:20.693267 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:46:20.695885 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:46:20.696472 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:46:20.696571 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:46:20.705638 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:46:20.706318 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:46:20.706375 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:46:20.707814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:46:20.707855 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:20.710066 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:46:20.710108 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:46:20.711539 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:46:20.711580 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:46:20.713833 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:46:20.722842 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:46:20.723733 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:46:20.725575 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:46:20.725677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:46:20.727179 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:46:20.727273 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:46:20.729612 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:46:20.729741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:46:20.731338 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:46:20.731381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:46:20.732745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Mar 17 17:46:20.732779 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:46:20.734306 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:46:20.734365 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:46:20.736764 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:46:20.736825 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:46:20.739114 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:46:20.739163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:46:20.750679 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:46:20.751733 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:46:20.751816 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:46:20.754417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:46:20.754484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:20.759037 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:46:20.759882 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:46:20.760975 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:46:20.763245 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:46:20.772108 systemd[1]: Switching root. Mar 17 17:46:20.806448 systemd-journald[238]: Journal stopped Mar 17 17:46:21.498551 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Mar 17 17:46:21.498607 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:46:21.498619 kernel: SELinux: policy capability open_perms=1 Mar 17 17:46:21.498629 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:46:21.498642 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:46:21.498651 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:46:21.498661 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:46:21.498670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:46:21.498679 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:46:21.498692 kernel: audit: type=1403 audit(1742233580.935:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:46:21.498702 systemd[1]: Successfully loaded SELinux policy in 30.834ms. Mar 17 17:46:21.498723 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.343ms. Mar 17 17:46:21.498734 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:46:21.498744 systemd[1]: Detected virtualization kvm. Mar 17 17:46:21.498754 systemd[1]: Detected architecture arm64. Mar 17 17:46:21.498764 systemd[1]: Detected first boot. Mar 17 17:46:21.498774 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:46:21.498785 zram_generator::config[1049]: No configuration found. Mar 17 17:46:21.498796 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:46:21.498812 systemd[1]: Populated /etc with preset unit settings. 
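Annotation: after switch-root the journal restarts under the real root and systemd 256.8 prints its compile-time feature string. A small illustrative sketch that splits that string (copied from the journal line above) into the options compiled in and compiled out:

```python
# Split the systemd 256.8 build-feature string from the journal line above into
# options compiled in (+) and compiled out (-).
features = (
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
    "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
    "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
    "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
    "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE"
)

tokens = features.split()
enabled = sorted(t[1:] for t in tokens if t.startswith("+"))
disabled = sorted(t[1:] for t in tokens if t.startswith("-"))

print(f"{len(enabled)} compiled in, {len(disabled)} compiled out")
print("SELINUX" in enabled)    # True, matching the SELinux policy load logged above
print("APPARMOR" in disabled)  # True
```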
Mar 17 17:46:21.498825 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:46:21.498837 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:46:21.498847 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:46:21.498860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:46:21.498870 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:46:21.498880 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:46:21.498890 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:46:21.498900 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:46:21.498910 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:46:21.498920 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:46:21.498932 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:46:21.498944 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:46:21.498954 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:46:21.498964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:46:21.498975 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:46:21.498985 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:46:21.498995 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:46:21.499005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:46:21.499015 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:46:21.499025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:46:21.499037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:46:21.499047 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:46:21.499058 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:46:21.499068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:46:21.499078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:46:21.499088 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:46:21.499098 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:46:21.499108 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:46:21.499120 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:46:21.499131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:46:21.499141 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:46:21.499152 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:46:21.499162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:46:21.499172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:46:21.499182 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:46:21.499193 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:46:21.499203 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:46:21.499214 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:46:21.499224 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:46:21.499234 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:46:21.499244 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:46:21.499255 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:46:21.499265 systemd[1]: Reached target machines.target - Containers. Mar 17 17:46:21.499276 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:46:21.499286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:46:21.499298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:46:21.499308 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:46:21.499319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:46:21.499333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:46:21.499343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:46:21.499354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:46:21.499364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:46:21.499374 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:46:21.499384 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:46:21.499396 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:46:21.499406 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:46:21.499416 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:46:21.499426 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:46:21.499436 kernel: loop: module loaded Mar 17 17:46:21.499446 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:46:21.499455 kernel: ACPI: bus type drm_connector registered Mar 17 17:46:21.499464 kernel: fuse: init (API version 7.39) Mar 17 17:46:21.499475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:46:21.499486 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:46:21.499496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:46:21.499507 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:46:21.499548 systemd-journald[1128]: Collecting audit messages is disabled. 
Mar 17 17:46:21.499571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:46:21.499582 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:46:21.499592 systemd-journald[1128]: Journal started Mar 17 17:46:21.499612 systemd-journald[1128]: Runtime Journal (/run/log/journal/e6286b39c01746c7b0692a9bd61c0514) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:46:21.325748 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:46:21.339352 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:46:21.339717 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:46:21.500889 systemd[1]: Stopped verity-setup.service. Mar 17 17:46:21.504820 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:46:21.505373 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:46:21.506582 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:46:21.507782 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:46:21.508847 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:46:21.510019 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:46:21.511182 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:46:21.513548 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:46:21.514907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:46:21.516359 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:46:21.516531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:46:21.517986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:46:21.518135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:46:21.519505 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:46:21.521733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:46:21.522860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:46:21.523087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:46:21.524309 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:46:21.524606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:46:21.525799 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:46:21.526060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:46:21.527171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:46:21.528312 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:46:21.529570 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:46:21.530845 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:46:21.542994 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:46:21.556641 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:46:21.558436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 17 17:46:21.559306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:46:21.559343 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:46:21.561098 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:46:21.563001 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:46:21.564760 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:46:21.565626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:46:21.566920 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:46:21.569267 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:46:21.570203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:46:21.572426 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:46:21.574014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:46:21.575722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:46:21.577348 systemd-journald[1128]: Time spent on flushing to /var/log/journal/e6286b39c01746c7b0692a9bd61c0514 is 20.801ms for 848 entries. Mar 17 17:46:21.577348 systemd-journald[1128]: System Journal (/var/log/journal/e6286b39c01746c7b0692a9bd61c0514) is 8M, max 195.6M, 187.6M free. Mar 17 17:46:21.609530 systemd-journald[1128]: Received client request to flush runtime journal. Mar 17 17:46:21.609573 kernel: loop0: detected capacity change from 0 to 113512 Mar 17 17:46:21.609599 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:46:21.579165 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:46:21.581792 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:46:21.584581 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:46:21.585666 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:46:21.586993 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:46:21.589560 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:46:21.590938 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:46:21.597637 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:46:21.608012 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:46:21.611689 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:46:21.613030 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:46:21.615164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:21.622233 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
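Annotation: systemd-journald reports spending 20.801 ms flushing 848 runtime-journal entries into the persistent system journal. Quick arithmetic on those numbers from the lines above:

```python
# Journald flush statistics from the lines above: 20.801 ms for 848 entries moved
# from the 47.3M-capped runtime journal to the 195.6M-capped system journal.
flush_ms = 20.801
entries = 848

print(f"~{flush_ms * 1000 / entries:.1f} µs per flushed entry")  # ~24.5 µs
```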
Mar 17 17:46:21.626290 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:46:21.633118 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:46:21.636588 kernel: loop1: detected capacity change from 0 to 123192 Mar 17 17:46:21.644693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:46:21.661700 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 17 17:46:21.661718 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 17 17:46:21.666097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:46:21.668554 kernel: loop2: detected capacity change from 0 to 194096 Mar 17 17:46:21.707545 kernel: loop3: detected capacity change from 0 to 113512 Mar 17 17:46:21.712542 kernel: loop4: detected capacity change from 0 to 123192 Mar 17 17:46:21.717538 kernel: loop5: detected capacity change from 0 to 194096 Mar 17 17:46:21.721172 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:46:21.721574 (sd-merge)[1192]: Merged extensions into '/usr'. Mar 17 17:46:21.725486 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:46:21.725506 systemd[1]: Reloading... Mar 17 17:46:21.776570 zram_generator::config[1220]: No configuration found. Mar 17 17:46:21.854150 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:46:21.873080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:46:21.921747 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:46:21.921978 systemd[1]: Reloading finished in 196 ms. Mar 17 17:46:21.940547 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:46:21.941661 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:46:21.953759 systemd[1]: Starting ensure-sysext.service... Mar 17 17:46:21.955397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:46:21.967926 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:46:21.967941 systemd[1]: Reloading... Mar 17 17:46:21.971636 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:46:21.971857 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:46:21.972497 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:46:21.972795 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 17 17:46:21.972864 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 17 17:46:21.975735 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:46:21.975748 systemd-tmpfiles[1259]: Skipping /boot Mar 17 17:46:21.984948 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:46:21.984961 systemd-tmpfiles[1259]: Skipping /boot Mar 17 17:46:22.012552 zram_generator::config[1289]: No configuration found. 
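Annotation: the (sd-merge) lines above merge the containerd-flatcar, docker-flatcar and kubernetes system extensions into /usr, which is why systemd then reloads and systemd-sysext/ldconfig finish right after. A tiny illustrative parse of the sysext image name fetched earlier; the NAME-vVERSION-ARCH.raw pattern is an assumption generalized from that single filename, not a documented naming contract:

```python
import re

# Illustrative parse of the sysext image fetched earlier; the pattern is an
# assumption generalized from that one filename.
name = "kubernetes-v1.30.1-arm64.raw"
m = re.fullmatch(r"(?P<ext>[a-z0-9-]+)-v(?P<version>[\d.]+)-(?P<arch>\w+)\.raw", name)
if m:
    # -> kubernetes 1.30.1 arm64, matching the 'kubernetes' extension merged above
    print(m.group("ext"), m.group("version"), m.group("arch"))
```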
Mar 17 17:46:22.095964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:46:22.144996 systemd[1]: Reloading finished in 176 ms. Mar 17 17:46:22.155996 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:46:22.173840 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:46:22.182097 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:46:22.184227 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:46:22.186804 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:46:22.193862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:46:22.199606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:46:22.203614 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:46:22.208420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:46:22.214429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:46:22.218763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:46:22.223686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:46:22.224624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:46:22.224729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:46:22.226873 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:46:22.228823 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:46:22.230435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:46:22.230604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:46:22.232062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:46:22.232201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:46:22.233760 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:46:22.233916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:46:22.242205 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:46:22.245115 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:46:22.248608 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Mar 17 17:46:22.250369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:46:22.258795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:46:22.261306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:46:22.265853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 17 17:46:22.266699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:46:22.266810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:46:22.269657 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:46:22.270480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:46:22.271809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:46:22.273557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:46:22.274872 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:46:22.275009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:46:22.276223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:46:22.286993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:46:22.287175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:46:22.293781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:46:22.295264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:46:22.297680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:46:22.301121 augenrules[1381]: No rules Mar 17 17:46:22.301584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:46:22.302404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:46:22.302449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:46:22.304321 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:46:22.305111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:46:22.305168 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:46:22.305687 systemd[1]: Finished ensure-sysext.service. Mar 17 17:46:22.306557 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:46:22.306735 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:46:22.310070 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:46:22.312081 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:46:22.316070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:46:22.317059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:46:22.332083 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 17 17:46:22.335075 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:46:22.335249 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:46:22.336935 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:46:22.338016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:46:22.355889 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:46:22.356058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:46:22.358671 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1375) Mar 17 17:46:22.386682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:46:22.391866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:46:22.423559 systemd-resolved[1328]: Positive Trust Anchors: Mar 17 17:46:22.423573 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:46:22.423606 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:46:22.430634 systemd-resolved[1328]: Defaulting to hostname 'linux'. Mar 17 17:46:22.438377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:46:22.444758 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:46:22.445803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:46:22.463803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:46:22.464885 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:46:22.466847 systemd-networkd[1393]: lo: Link UP Mar 17 17:46:22.466857 systemd-networkd[1393]: lo: Gained carrier Mar 17 17:46:22.467777 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:46:22.467842 systemd-networkd[1393]: Enumeration completed Mar 17 17:46:22.468230 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:46:22.468239 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:46:22.468879 systemd-networkd[1393]: eth0: Link UP Mar 17 17:46:22.468885 systemd-networkd[1393]: eth0: Gained carrier Mar 17 17:46:22.468898 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:46:22.470505 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:46:22.471552 systemd[1]: Reached target network.target - Network. Mar 17 17:46:22.473465 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 17 17:46:22.475278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:46:22.476454 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:46:22.479678 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:46:22.484772 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:46:22.487614 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Mar 17 17:46:22.492126 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:46:22.492171 systemd-timesyncd[1404]: Initial clock synchronization to Mon 2025-03-17 17:46:22.200850 UTC. Mar 17 17:46:22.492946 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:46:22.497804 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:46:22.510593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:46:22.528558 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:46:22.530025 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:46:22.531144 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:46:22.532265 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:46:22.533498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:46:22.534860 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:46:22.535995 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:46:22.537342 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:46:22.538572 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:46:22.538603 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:46:22.539452 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:46:22.541662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:46:22.543981 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:46:22.547046 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:46:22.548477 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:46:22.549739 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:46:22.552821 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:46:22.554230 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:46:22.556414 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:46:22.558029 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:46:22.559162 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:46:22.560110 systemd[1]: Reached target basic.target - Basic System. 
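Annotation: systemd-networkd brings eth0 up with a DHCPv4 lease of 10.0.0.103/16 from 10.0.0.1, and systemd-timesyncd then synchronizes against the same address on port 123. A short check with Python's ipaddress module that the gateway/NTP server sits on-link in that /16:

```python
import ipaddress

# DHCPv4 lease logged above: 10.0.0.103/16 with gateway 10.0.0.1, which is also
# the NTP server systemd-timesyncd contacted on port 123.
iface = ipaddress.ip_interface("10.0.0.103/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: gateway/NTP server is directly reachable
```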
Mar 17 17:46:22.561069 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:46:22.561100 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:46:22.562119 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:46:22.563869 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:46:22.564441 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:46:22.567483 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:46:22.569674 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:46:22.570606 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:46:22.572679 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:46:22.576168 jq[1437]: false Mar 17 17:46:22.577705 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:46:22.581443 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:46:22.585460 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:46:22.587163 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:46:22.587703 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:46:22.588908 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:46:22.591125 dbus-daemon[1436]: [system] SELinux support is enabled Mar 17 17:46:22.591586 extend-filesystems[1438]: Found loop3 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found loop4 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found loop5 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found vda Mar 17 17:46:22.591586 extend-filesystems[1438]: Found vda1 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found vda2 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found vda3 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found usr Mar 17 17:46:22.591586 extend-filesystems[1438]: Found vda4 Mar 17 17:46:22.591586 extend-filesystems[1438]: Found vda6 Mar 17 17:46:22.599756 extend-filesystems[1438]: Found vda7 Mar 17 17:46:22.599756 extend-filesystems[1438]: Found vda9 Mar 17 17:46:22.599756 extend-filesystems[1438]: Checking size of /dev/vda9 Mar 17 17:46:22.591939 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:46:22.596835 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:46:22.601556 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:46:22.604600 jq[1450]: true Mar 17 17:46:22.607965 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:46:22.608153 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:46:22.608406 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:46:22.608579 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:46:22.610149 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 17 17:46:22.610335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:46:22.620536 jq[1459]: true Mar 17 17:46:22.621024 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:46:22.621063 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:46:22.622245 extend-filesystems[1438]: Resized partition /dev/vda9 Mar 17 17:46:22.631676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1366) Mar 17 17:46:22.622186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:46:22.622203 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:46:22.631230 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:46:22.633636 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:46:22.639019 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:46:22.639060 update_engine[1449]: I20250317 17:46:22.637403 1449 main.cc:92] Flatcar Update Engine starting Mar 17 17:46:22.640913 update_engine[1449]: I20250317 17:46:22.640730 1449 update_check_scheduler.cc:74] Next update check in 10m41s Mar 17 17:46:22.641404 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:46:22.656697 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:46:22.663536 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:46:22.685471 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:46:22.685471 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:46:22.685471 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:46:22.691703 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Mar 17 17:46:22.688249 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:46:22.688834 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:46:22.689035 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:46:22.689471 systemd-logind[1446]: New seat seat0. Mar 17 17:46:22.693323 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:46:22.696885 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:46:22.699620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:46:22.706607 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:46:22.727860 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:46:22.819974 containerd[1461]: time="2025-03-17T17:46:22.819879160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:46:22.843203 containerd[1461]: time="2025-03-17T17:46:22.843157800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844497960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844542280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844559840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844703040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844718880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844769560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844782400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844981000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.844995240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.845007920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845607 containerd[1461]: time="2025-03-17T17:46:22.845016720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845853 containerd[1461]: time="2025-03-17T17:46:22.845079560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845853 containerd[1461]: time="2025-03-17T17:46:22.845253800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845853 containerd[1461]: time="2025-03-17T17:46:22.845366800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:46:22.845853 containerd[1461]: time="2025-03-17T17:46:22.845379200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 17 17:46:22.845853 containerd[1461]: time="2025-03-17T17:46:22.845443800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:46:22.845853 containerd[1461]: time="2025-03-17T17:46:22.845480200Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:46:22.865529 containerd[1461]: time="2025-03-17T17:46:22.865497680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:46:22.865631 containerd[1461]: time="2025-03-17T17:46:22.865618440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:46:22.865747 containerd[1461]: time="2025-03-17T17:46:22.865732200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:46:22.865807 containerd[1461]: time="2025-03-17T17:46:22.865795480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:46:22.865871 containerd[1461]: time="2025-03-17T17:46:22.865858200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:46:22.866054 containerd[1461]: time="2025-03-17T17:46:22.866038360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:46:22.866407 containerd[1461]: time="2025-03-17T17:46:22.866389560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:46:22.866594 containerd[1461]: time="2025-03-17T17:46:22.866578160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:46:22.866662 containerd[1461]: time="2025-03-17T17:46:22.866650320Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:46:22.866736 containerd[1461]: time="2025-03-17T17:46:22.866722440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:46:22.866789 containerd[1461]: time="2025-03-17T17:46:22.866777360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.866864 containerd[1461]: time="2025-03-17T17:46:22.866850600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.866915 containerd[1461]: time="2025-03-17T17:46:22.866903560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.866966 containerd[1461]: time="2025-03-17T17:46:22.866954280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.867018 containerd[1461]: time="2025-03-17T17:46:22.867007160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.867069 containerd[1461]: time="2025-03-17T17:46:22.867057360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.867126 containerd[1461]: time="2025-03-17T17:46:22.867114600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 17 17:46:22.867173 containerd[1461]: time="2025-03-17T17:46:22.867162080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:46:22.867239 containerd[1461]: time="2025-03-17T17:46:22.867227080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867291 containerd[1461]: time="2025-03-17T17:46:22.867280040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867340 containerd[1461]: time="2025-03-17T17:46:22.867329480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867404 containerd[1461]: time="2025-03-17T17:46:22.867392160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867459 containerd[1461]: time="2025-03-17T17:46:22.867447000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867509 containerd[1461]: time="2025-03-17T17:46:22.867498240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867580 containerd[1461]: time="2025-03-17T17:46:22.867567120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867631 containerd[1461]: time="2025-03-17T17:46:22.867619600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867682 containerd[1461]: time="2025-03-17T17:46:22.867671000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867735 containerd[1461]: time="2025-03-17T17:46:22.867724200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867803 containerd[1461]: time="2025-03-17T17:46:22.867790320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867866 containerd[1461]: time="2025-03-17T17:46:22.867854760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867917 containerd[1461]: time="2025-03-17T17:46:22.867906840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.867968 containerd[1461]: time="2025-03-17T17:46:22.867956760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:46:22.868028 containerd[1461]: time="2025-03-17T17:46:22.868016520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.868080 containerd[1461]: time="2025-03-17T17:46:22.868068760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.868137 containerd[1461]: time="2025-03-17T17:46:22.868125360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:46:22.869004 containerd[1461]: time="2025-03-17T17:46:22.868982160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 17 17:46:22.869161 containerd[1461]: time="2025-03-17T17:46:22.869144480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:46:22.869214 containerd[1461]: time="2025-03-17T17:46:22.869202080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:46:22.869267 containerd[1461]: time="2025-03-17T17:46:22.869253840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:46:22.869312 containerd[1461]: time="2025-03-17T17:46:22.869301560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.869362 containerd[1461]: time="2025-03-17T17:46:22.869350880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:46:22.869429 containerd[1461]: time="2025-03-17T17:46:22.869417120Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:46:22.869482 containerd[1461]: time="2025-03-17T17:46:22.869471360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:46:22.869905 containerd[1461]: time="2025-03-17T17:46:22.869856720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:46:22.870056 containerd[1461]: time="2025-03-17T17:46:22.870041560Z" level=info msg="Connect containerd service" Mar 17 17:46:22.870125 containerd[1461]: time="2025-03-17T17:46:22.870113520Z" level=info msg="using legacy CRI server" Mar 17 17:46:22.870186 containerd[1461]: time="2025-03-17T17:46:22.870174280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:46:22.870449 containerd[1461]: time="2025-03-17T17:46:22.870436240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:46:22.872268 containerd[1461]: time="2025-03-17T17:46:22.872239160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872434720Z" level=info msg="Start subscribing containerd event" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872478880Z" level=info msg="Start recovering state" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872580680Z" level=info msg="Start event monitor" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872605280Z" level=info msg="Start snapshots syncer" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872616480Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872627840Z" level=info msg="Start streaming server" Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872907680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.872954440Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:46:22.874529 containerd[1461]: time="2025-03-17T17:46:22.873018160Z" level=info msg="containerd successfully booted in 0.056405s" Mar 17 17:46:22.873099 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:46:23.043025 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:46:23.060412 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:46:23.071890 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:46:23.077183 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:46:23.077370 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:46:23.079884 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:46:23.090007 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:46:23.092633 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:46:23.094567 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:46:23.095772 systemd[1]: Reached target getty.target - Login Prompts. 
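Note: the "Start cri plugin with config" dump above boils down to a handful of containerd settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI binaries/configs under /opt/cni/bin and /etc/cni/net.d. As a rough illustration (reconstructed from those logged values, not read from this host; section names follow containerd 1.7 / config version 2 conventions), the equivalent /etc/containerd/config.toml fragment would look like:

```toml
# Illustrative config.toml fragment matching the CRI config dump above
# (reconstructed from the logged values; not copied from this host).
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

The SystemdCgroup = true option here is the setting the kubelet's own cgroup driver has to agree with later in the log.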
Mar 17 17:46:23.656663 systemd-networkd[1393]: eth0: Gained IPv6LL Mar 17 17:46:23.659464 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:46:23.661211 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:46:23.671761 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:46:23.674055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:23.676233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:46:23.692214 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:46:23.692430 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:46:23.694103 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:46:23.700461 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:46:24.140562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:24.141772 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:46:24.144588 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:46:24.147204 systemd[1]: Startup finished in 518ms (kernel) + 4.244s (initrd) + 3.246s (userspace) = 8.009s. Mar 17 17:46:24.618116 kubelet[1544]: E0317 17:46:24.618010 1544 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:46:24.620732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:46:24.620860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:46:24.621213 systemd[1]: kubelet.service: Consumed 821ms CPU time, 241.1M memory peak. Mar 17 17:46:29.249793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:46:29.250878 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:59974.service - OpenSSH per-connection server daemon (10.0.0.1:59974). Mar 17 17:46:29.312898 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 59974 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:29.314321 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:29.321774 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:46:29.335811 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:46:29.340922 systemd-logind[1446]: New session 1 of user core. Mar 17 17:46:29.344590 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:46:29.346945 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:46:29.352768 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:46:29.354768 systemd-logind[1446]: New session c1 of user core. Mar 17 17:46:29.454988 systemd[1562]: Queued start job for default target default.target. Mar 17 17:46:29.463338 systemd[1562]: Created slice app.slice - User Application Slice. Mar 17 17:46:29.463365 systemd[1562]: Reached target paths.target - Paths. 
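Note: the kubelet start at 17:46:24 above exits because /var/lib/kubelet/config.yaml does not exist yet; that file is created later by the cluster bootstrap rather than shipped with the image. A minimal KubeletConfiguration of the shape that check expects looks roughly like the sketch below. This is an illustration only: apart from cgroupDriver: systemd, which must match the SystemdCgroup = true runc option logged earlier, the values are assumptions, not the file this node eventually receives.

```yaml
# Illustrative minimal /var/lib/kubelet/config.yaml
# (normally written by the cluster bootstrap tooling, e.g. kubeadm; absent on first boot).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # must agree with containerd's SystemdCgroup=true
staticPodPath: /etc/kubernetes/manifests  # the path the kubelet later reports as missing
```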
Mar 17 17:46:29.463397 systemd[1562]: Reached target timers.target - Timers. Mar 17 17:46:29.464560 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:46:29.472879 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:46:29.472933 systemd[1562]: Reached target sockets.target - Sockets. Mar 17 17:46:29.472971 systemd[1562]: Reached target basic.target - Basic System. Mar 17 17:46:29.473000 systemd[1562]: Reached target default.target - Main User Target. Mar 17 17:46:29.473027 systemd[1562]: Startup finished in 112ms. Mar 17 17:46:29.473155 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:46:29.474411 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:46:29.535639 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:59982.service - OpenSSH per-connection server daemon (10.0.0.1:59982). Mar 17 17:46:29.578672 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 59982 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:29.579776 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:29.583152 systemd-logind[1446]: New session 2 of user core. Mar 17 17:46:29.591653 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:46:29.640650 sshd[1575]: Connection closed by 10.0.0.1 port 59982 Mar 17 17:46:29.640973 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:29.650481 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:59982.service: Deactivated successfully. Mar 17 17:46:29.651877 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:46:29.653297 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:46:29.663787 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:59992.service - OpenSSH per-connection server daemon (10.0.0.1:59992). Mar 17 17:46:29.664800 systemd-logind[1446]: Removed session 2. Mar 17 17:46:29.702468 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 59992 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:29.703507 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:29.707463 systemd-logind[1446]: New session 3 of user core. Mar 17 17:46:29.721653 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:46:29.768334 sshd[1583]: Connection closed by 10.0.0.1 port 59992 Mar 17 17:46:29.768204 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:29.778510 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:59992.service: Deactivated successfully. Mar 17 17:46:29.779924 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:46:29.780572 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:46:29.782212 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Mar 17 17:46:29.782958 systemd-logind[1446]: Removed session 3. Mar 17 17:46:29.824695 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:29.825786 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:29.829571 systemd-logind[1446]: New session 4 of user core. Mar 17 17:46:29.837668 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 17 17:46:29.887486 sshd[1591]: Connection closed by 10.0.0.1 port 60006 Mar 17 17:46:29.887374 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:29.896448 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:60006.service: Deactivated successfully. Mar 17 17:46:29.897870 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:46:29.898537 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:46:29.910760 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:60014.service - OpenSSH per-connection server daemon (10.0.0.1:60014). Mar 17 17:46:29.911599 systemd-logind[1446]: Removed session 4. Mar 17 17:46:29.949605 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 60014 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:29.950630 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:29.954161 systemd-logind[1446]: New session 5 of user core. Mar 17 17:46:29.970733 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:46:30.024782 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:46:30.025060 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:46:30.040272 sudo[1600]: pam_unix(sudo:session): session closed for user root Mar 17 17:46:30.041539 sshd[1599]: Connection closed by 10.0.0.1 port 60014 Mar 17 17:46:30.042092 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:30.057661 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:60014.service: Deactivated successfully. Mar 17 17:46:30.059902 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:46:30.060632 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:46:30.062751 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:60016.service - OpenSSH per-connection server daemon (10.0.0.1:60016). Mar 17 17:46:30.063444 systemd-logind[1446]: Removed session 5. Mar 17 17:46:30.105496 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 60016 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:30.106864 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:30.110982 systemd-logind[1446]: New session 6 of user core. Mar 17 17:46:30.116654 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:46:30.166472 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:46:30.166776 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:46:30.169844 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 17 17:46:30.174250 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:46:30.174748 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:46:30.199799 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:46:30.221459 augenrules[1632]: No rules Mar 17 17:46:30.222587 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:46:30.223581 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Mar 17 17:46:30.225563 sudo[1609]: pam_unix(sudo:session): session closed for user root Mar 17 17:46:30.226653 sshd[1608]: Connection closed by 10.0.0.1 port 60016 Mar 17 17:46:30.227074 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:30.242423 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:60016.service: Deactivated successfully. Mar 17 17:46:30.244412 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:46:30.245564 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:46:30.246609 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:60026.service - OpenSSH per-connection server daemon (10.0.0.1:60026). Mar 17 17:46:30.247316 systemd-logind[1446]: Removed session 6. Mar 17 17:46:30.288617 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 60026 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:46:30.289711 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:30.293779 systemd-logind[1446]: New session 7 of user core. Mar 17 17:46:30.309684 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:46:30.359266 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:46:30.359677 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:46:30.379808 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:46:30.392602 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:46:30.393631 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:46:30.849457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:30.849613 systemd[1]: kubelet.service: Consumed 821ms CPU time, 241.1M memory peak. Mar 17 17:46:30.861726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:30.876234 systemd[1]: Reload requested from client PID 1691 ('systemctl') (unit session-7.scope)... Mar 17 17:46:30.876249 systemd[1]: Reloading... Mar 17 17:46:30.946615 zram_generator::config[1737]: No configuration found. Mar 17 17:46:31.119673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:46:31.189092 systemd[1]: Reloading finished in 312 ms. Mar 17 17:46:31.241882 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:31.244483 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:46:31.245631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:31.245692 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.4M memory peak. Mar 17 17:46:31.247450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:31.337724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:31.341821 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:46:31.379025 kubelet[1781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:46:31.379025 kubelet[1781]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:46:31.379025 kubelet[1781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:46:31.379025 kubelet[1781]: I0317 17:46:31.378991 1781 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:46:32.133249 kubelet[1781]: I0317 17:46:32.133208 1781 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:46:32.133249 kubelet[1781]: I0317 17:46:32.133238 1781 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:46:32.133446 kubelet[1781]: I0317 17:46:32.133431 1781 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:46:32.168970 kubelet[1781]: I0317 17:46:32.168912 1781 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:46:32.180668 kubelet[1781]: I0317 17:46:32.180640 1781 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:46:32.182450 kubelet[1781]: I0317 17:46:32.182386 1781 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:46:32.182625 kubelet[1781]: I0317 17:46:32.182440 1781 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.103","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:46:32.182721 kubelet[1781]: I0317 17:46:32.182680 1781 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:46:32.182721 kubelet[1781]: I0317 17:46:32.182690 1781 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:46:32.182903 kubelet[1781]: I0317 
17:46:32.182879 1781 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:46:32.184738 kubelet[1781]: I0317 17:46:32.184706 1781 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:46:32.184738 kubelet[1781]: I0317 17:46:32.184741 1781 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:46:32.184905 kubelet[1781]: I0317 17:46:32.184842 1781 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:46:32.185020 kubelet[1781]: I0317 17:46:32.184926 1781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:46:32.185330 kubelet[1781]: E0317 17:46:32.185248 1781 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:32.185330 kubelet[1781]: E0317 17:46:32.185231 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:32.186620 kubelet[1781]: I0317 17:46:32.186506 1781 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:46:32.186982 kubelet[1781]: I0317 17:46:32.186958 1781 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:46:32.187099 kubelet[1781]: W0317 17:46:32.187058 1781 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:46:32.188036 kubelet[1781]: I0317 17:46:32.187940 1781 server.go:1264] "Started kubelet" Mar 17 17:46:32.188211 kubelet[1781]: I0317 17:46:32.188074 1781 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:46:32.188211 kubelet[1781]: I0317 17:46:32.188121 1781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:46:32.188888 kubelet[1781]: I0317 17:46:32.188477 1781 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:46:32.189486 kubelet[1781]: I0317 17:46:32.189086 1781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:46:32.189486 kubelet[1781]: I0317 17:46:32.189176 1781 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:46:32.189486 kubelet[1781]: I0317 17:46:32.189301 1781 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:46:32.190558 kubelet[1781]: I0317 17:46:32.190526 1781 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:46:32.190674 kubelet[1781]: I0317 17:46:32.190661 1781 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:46:32.191712 kubelet[1781]: E0317 17:46:32.191614 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.193704 kubelet[1781]: I0317 17:46:32.192911 1781 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:46:32.193704 kubelet[1781]: I0317 17:46:32.193006 1781 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:46:32.193869 kubelet[1781]: E0317 17:46:32.193827 1781 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:46:32.194886 kubelet[1781]: I0317 17:46:32.194851 1781 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:46:32.204148 kubelet[1781]: W0317 17:46:32.204096 1781 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:46:32.204148 kubelet[1781]: E0317 17:46:32.204133 1781 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:46:32.204283 kubelet[1781]: W0317 17:46:32.204181 1781 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:46:32.204283 kubelet[1781]: E0317 17:46:32.204192 1781 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:46:32.205635 kubelet[1781]: E0317 17:46:32.205545 1781 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.103\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 17:46:32.205723 kubelet[1781]: W0317 17:46:32.205672 1781 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:46:32.205723 kubelet[1781]: E0317 17:46:32.205694 1781 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:46:32.206438 kubelet[1781]: E0317 17:46:32.204229 1781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.103.182da838aa1d0a4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.103,UID:10.0.0.103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.103,},FirstTimestamp:2025-03-17 17:46:32.187914829 +0000 UTC m=+0.843130221,LastTimestamp:2025-03-17 17:46:32.187914829 +0000 UTC m=+0.843130221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.103,}" Mar 17 17:46:32.208200 kubelet[1781]: E0317 17:46:32.208130 1781 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.103.182da838aa7717a3 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.103,UID:10.0.0.103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.103,},FirstTimestamp:2025-03-17 17:46:32.193816483 +0000 UTC m=+0.849031876,LastTimestamp:2025-03-17 17:46:32.193816483 +0000 UTC m=+0.849031876,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.103,}" Mar 17 17:46:32.210927 kubelet[1781]: I0317 17:46:32.210807 1781 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:46:32.210927 kubelet[1781]: I0317 17:46:32.210826 1781 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:46:32.210927 kubelet[1781]: I0317 17:46:32.210845 1781 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:46:32.285815 kubelet[1781]: I0317 17:46:32.285778 1781 policy_none.go:49] "None policy: Start" Mar 17 17:46:32.286704 kubelet[1781]: I0317 17:46:32.286663 1781 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:46:32.286704 kubelet[1781]: I0317 17:46:32.286690 1781 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:46:32.292174 kubelet[1781]: I0317 17:46:32.292150 1781 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.103" Mar 17 17:46:32.293858 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:46:32.297079 kubelet[1781]: I0317 17:46:32.297051 1781 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.103" Mar 17 17:46:32.304777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:46:32.305893 kubelet[1781]: E0317 17:46:32.305849 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.306954 kubelet[1781]: I0317 17:46:32.306928 1781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:46:32.308605 kubelet[1781]: I0317 17:46:32.308561 1781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:46:32.308662 kubelet[1781]: I0317 17:46:32.308657 1781 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:46:32.308696 kubelet[1781]: I0317 17:46:32.308675 1781 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:46:32.308848 kubelet[1781]: E0317 17:46:32.308715 1781 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:46:32.308738 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
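Note: the deprecation warnings at 17:46:31 and the NodeConfig dump at 17:46:32 point at the same thing: these settings are meant to live in the kubelet config file rather than on the command line. Mapped onto KubeletConfiguration fields, the logged values (systemd cgroup driver, cgroups-per-QOS under cgroup root "/", OOM score adjust -999, and the hard eviction thresholds) read roughly as follows. This is a sketch of the flag-to-config mapping, not this node's actual file.

```yaml
# Illustrative KubeletConfiguration fields implied by the NodeConfig dump above
# (the flag-to-config mapping the deprecation warnings refer to).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
cgroupsPerQOS: true
cgroupRoot: "/"
oomScoreAdj: -999
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
```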
Mar 17 17:46:32.318904 kubelet[1781]: I0317 17:46:32.318504 1781 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:46:32.318904 kubelet[1781]: I0317 17:46:32.318735 1781 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:46:32.318904 kubelet[1781]: I0317 17:46:32.318838 1781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:46:32.319910 kubelet[1781]: E0317 17:46:32.319887 1781 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.103\" not found" Mar 17 17:46:32.356892 sudo[1644]: pam_unix(sudo:session): session closed for user root Mar 17 17:46:32.358028 sshd[1643]: Connection closed by 10.0.0.1 port 60026 Mar 17 17:46:32.358364 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:32.361780 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:60026.service: Deactivated successfully. Mar 17 17:46:32.363682 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:46:32.363909 systemd[1]: session-7.scope: Consumed 431ms CPU time, 111.2M memory peak. Mar 17 17:46:32.364821 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:46:32.365836 systemd-logind[1446]: Removed session 7. Mar 17 17:46:32.406456 kubelet[1781]: E0317 17:46:32.406324 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.506808 kubelet[1781]: E0317 17:46:32.506760 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.607247 kubelet[1781]: E0317 17:46:32.607197 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.707893 kubelet[1781]: E0317 17:46:32.707790 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.808402 kubelet[1781]: E0317 17:46:32.808353 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:32.908968 kubelet[1781]: E0317 17:46:32.908920 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:33.009533 kubelet[1781]: E0317 17:46:33.009419 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:33.109966 kubelet[1781]: E0317 17:46:33.109918 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:33.135128 kubelet[1781]: I0317 17:46:33.135098 1781 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:46:33.135281 kubelet[1781]: W0317 17:46:33.135256 1781 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:46:33.186011 kubelet[1781]: E0317 17:46:33.185978 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:33.210241 kubelet[1781]: E0317 17:46:33.210201 1781 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:33.311046 kubelet[1781]: E0317 17:46:33.311006 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:33.411478 kubelet[1781]: E0317 17:46:33.411442 1781 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Mar 17 17:46:33.512837 kubelet[1781]: I0317 17:46:33.512810 1781 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:46:33.513257 containerd[1461]: time="2025-03-17T17:46:33.513125453Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:46:33.513574 kubelet[1781]: I0317 17:46:33.513310 1781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:46:34.186652 kubelet[1781]: I0317 17:46:34.186598 1781 apiserver.go:52] "Watching apiserver" Mar 17 17:46:34.186652 kubelet[1781]: E0317 17:46:34.186610 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:34.192133 kubelet[1781]: I0317 17:46:34.192090 1781 topology_manager.go:215] "Topology Admit Handler" podUID="525cb6d7-8b75-4824-85bd-c02669c698b0" podNamespace="calico-system" podName="calico-node-hkmgl" Mar 17 17:46:34.192208 kubelet[1781]: I0317 17:46:34.192185 1781 topology_manager.go:215] "Topology Admit Handler" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" podNamespace="calico-system" podName="csi-node-driver-njpv9" Mar 17 17:46:34.192277 kubelet[1781]: I0317 17:46:34.192250 1781 topology_manager.go:215] "Topology Admit Handler" podUID="a31eef41-beb8-4372-ba99-cc1aa2e244f1" podNamespace="kube-system" podName="kube-proxy-jxhbh" Mar 17 17:46:34.192627 kubelet[1781]: E0317 17:46:34.192599 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:34.200899 systemd[1]: Created slice kubepods-besteffort-pod525cb6d7_8b75_4824_85bd_c02669c698b0.slice - libcontainer container kubepods-besteffort-pod525cb6d7_8b75_4824_85bd_c02669c698b0.slice. Mar 17 17:46:34.215893 systemd[1]: Created slice kubepods-besteffort-poda31eef41_beb8_4372_ba99_cc1aa2e244f1.slice - libcontainer container kubepods-besteffort-poda31eef41_beb8_4372_ba99_cc1aa2e244f1.slice. 
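Note: the earlier "no network config found in /etc/cni/net.d" error and the "No cni config template is specified, wait for other system components to drop the config" message above resolve the same way: a CNI provider (here the calico-node pod being admitted above) eventually writes a conflist into /etc/cni/net.d. Purely for orientation, a generic conflist covering the logged PodCIDR 192.168.1.0/24 would look like the sketch below; Calico installs its own, different configuration, so none of this is taken from the host.

```json
{
  "cniVersion": "0.4.0",
  "name": "example-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "192.168.1.0/24" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```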
Mar 17 17:46:34.291166 kubelet[1781]: I0317 17:46:34.291122 1781 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:46:34.301058 kubelet[1781]: I0317 17:46:34.300995 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7eba3d18-846c-46fc-aea9-9a59d3672cd4-varrun\") pod \"csi-node-driver-njpv9\" (UID: \"7eba3d18-846c-46fc-aea9-9a59d3672cd4\") " pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:34.301058 kubelet[1781]: I0317 17:46:34.301031 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-lib-modules\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301058 kubelet[1781]: I0317 17:46:34.301051 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/525cb6d7-8b75-4824-85bd-c02669c698b0-tigera-ca-bundle\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301196 kubelet[1781]: I0317 17:46:34.301069 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-cni-log-dir\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301196 kubelet[1781]: I0317 17:46:34.301084 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eba3d18-846c-46fc-aea9-9a59d3672cd4-kubelet-dir\") pod \"csi-node-driver-njpv9\" (UID: \"7eba3d18-846c-46fc-aea9-9a59d3672cd4\") " pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:34.301196 kubelet[1781]: I0317 17:46:34.301102 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7eba3d18-846c-46fc-aea9-9a59d3672cd4-socket-dir\") pod \"csi-node-driver-njpv9\" (UID: \"7eba3d18-846c-46fc-aea9-9a59d3672cd4\") " pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:34.301196 kubelet[1781]: I0317 17:46:34.301120 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7eba3d18-846c-46fc-aea9-9a59d3672cd4-registration-dir\") pod \"csi-node-driver-njpv9\" (UID: \"7eba3d18-846c-46fc-aea9-9a59d3672cd4\") " pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:34.301196 kubelet[1781]: I0317 17:46:34.301148 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5jg\" (UniqueName: \"kubernetes.io/projected/7eba3d18-846c-46fc-aea9-9a59d3672cd4-kube-api-access-lp5jg\") pod \"csi-node-driver-njpv9\" (UID: \"7eba3d18-846c-46fc-aea9-9a59d3672cd4\") " pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:34.301291 kubelet[1781]: I0317 17:46:34.301165 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a31eef41-beb8-4372-ba99-cc1aa2e244f1-kube-proxy\") pod 
\"kube-proxy-jxhbh\" (UID: \"a31eef41-beb8-4372-ba99-cc1aa2e244f1\") " pod="kube-system/kube-proxy-jxhbh" Mar 17 17:46:34.301291 kubelet[1781]: I0317 17:46:34.301185 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-xtables-lock\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301291 kubelet[1781]: I0317 17:46:34.301202 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-cni-bin-dir\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301291 kubelet[1781]: I0317 17:46:34.301240 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a31eef41-beb8-4372-ba99-cc1aa2e244f1-lib-modules\") pod \"kube-proxy-jxhbh\" (UID: \"a31eef41-beb8-4372-ba99-cc1aa2e244f1\") " pod="kube-system/kube-proxy-jxhbh" Mar 17 17:46:34.301291 kubelet[1781]: I0317 17:46:34.301282 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5l4\" (UniqueName: \"kubernetes.io/projected/525cb6d7-8b75-4824-85bd-c02669c698b0-kube-api-access-kd5l4\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301378 kubelet[1781]: I0317 17:46:34.301312 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a31eef41-beb8-4372-ba99-cc1aa2e244f1-xtables-lock\") pod \"kube-proxy-jxhbh\" (UID: \"a31eef41-beb8-4372-ba99-cc1aa2e244f1\") " pod="kube-system/kube-proxy-jxhbh" Mar 17 17:46:34.301378 kubelet[1781]: I0317 17:46:34.301336 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqj7\" (UniqueName: \"kubernetes.io/projected/a31eef41-beb8-4372-ba99-cc1aa2e244f1-kube-api-access-6zqj7\") pod \"kube-proxy-jxhbh\" (UID: \"a31eef41-beb8-4372-ba99-cc1aa2e244f1\") " pod="kube-system/kube-proxy-jxhbh" Mar 17 17:46:34.301418 kubelet[1781]: I0317 17:46:34.301375 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/525cb6d7-8b75-4824-85bd-c02669c698b0-node-certs\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301418 kubelet[1781]: I0317 17:46:34.301398 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-var-lib-calico\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301418 kubelet[1781]: I0317 17:46:34.301416 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-cni-net-dir\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " 
pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301474 kubelet[1781]: I0317 17:46:34.301433 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-flexvol-driver-host\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301474 kubelet[1781]: I0317 17:46:34.301454 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-policysync\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.301474 kubelet[1781]: I0317 17:46:34.301469 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/525cb6d7-8b75-4824-85bd-c02669c698b0-var-run-calico\") pod \"calico-node-hkmgl\" (UID: \"525cb6d7-8b75-4824-85bd-c02669c698b0\") " pod="calico-system/calico-node-hkmgl" Mar 17 17:46:34.403399 kubelet[1781]: E0317 17:46:34.403225 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.403399 kubelet[1781]: W0317 17:46:34.403248 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.403399 kubelet[1781]: E0317 17:46:34.403265 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.403399 kubelet[1781]: E0317 17:46:34.403400 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.403399 kubelet[1781]: W0317 17:46:34.403408 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.403581 kubelet[1781]: E0317 17:46:34.403417 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.403581 kubelet[1781]: E0317 17:46:34.403546 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.403581 kubelet[1781]: W0317 17:46:34.403555 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.403581 kubelet[1781]: E0317 17:46:34.403564 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:34.403835 kubelet[1781]: E0317 17:46:34.403801 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.403835 kubelet[1781]: W0317 17:46:34.403815 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.403835 kubelet[1781]: E0317 17:46:34.403827 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.404032 kubelet[1781]: E0317 17:46:34.404000 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.404032 kubelet[1781]: W0317 17:46:34.404015 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.404032 kubelet[1781]: E0317 17:46:34.404029 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.404234 kubelet[1781]: E0317 17:46:34.404212 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.404234 kubelet[1781]: W0317 17:46:34.404225 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.404283 kubelet[1781]: E0317 17:46:34.404241 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.404431 kubelet[1781]: E0317 17:46:34.404408 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.404431 kubelet[1781]: W0317 17:46:34.404419 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.404431 kubelet[1781]: E0317 17:46:34.404431 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.404760 kubelet[1781]: E0317 17:46:34.404595 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.404760 kubelet[1781]: W0317 17:46:34.404606 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.404760 kubelet[1781]: E0317 17:46:34.404617 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:34.404850 kubelet[1781]: E0317 17:46:34.404765 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.404850 kubelet[1781]: W0317 17:46:34.404775 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.404850 kubelet[1781]: E0317 17:46:34.404782 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.405544 kubelet[1781]: E0317 17:46:34.404970 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.405544 kubelet[1781]: W0317 17:46:34.404980 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.405544 kubelet[1781]: E0317 17:46:34.404989 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.405544 kubelet[1781]: E0317 17:46:34.405273 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.405544 kubelet[1781]: W0317 17:46:34.405288 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.405544 kubelet[1781]: E0317 17:46:34.405303 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.413237 kubelet[1781]: E0317 17:46:34.413193 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.413237 kubelet[1781]: W0317 17:46:34.413212 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.413237 kubelet[1781]: E0317 17:46:34.413231 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.413905 kubelet[1781]: E0317 17:46:34.413875 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.413905 kubelet[1781]: W0317 17:46:34.413892 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.413968 kubelet[1781]: E0317 17:46:34.413911 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:34.414131 kubelet[1781]: E0317 17:46:34.414118 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:34.414131 kubelet[1781]: W0317 17:46:34.414131 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:34.414183 kubelet[1781]: E0317 17:46:34.414141 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:34.515005 containerd[1461]: time="2025-03-17T17:46:34.514884225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkmgl,Uid:525cb6d7-8b75-4824-85bd-c02669c698b0,Namespace:calico-system,Attempt:0,}" Mar 17 17:46:34.518473 containerd[1461]: time="2025-03-17T17:46:34.518338872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxhbh,Uid:a31eef41-beb8-4372-ba99-cc1aa2e244f1,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:35.026615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746167742.mount: Deactivated successfully. Mar 17 17:46:35.031468 containerd[1461]: time="2025-03-17T17:46:35.031428393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:35.032671 containerd[1461]: time="2025-03-17T17:46:35.032619594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:46:35.033407 containerd[1461]: time="2025-03-17T17:46:35.033136179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:35.033938 containerd[1461]: time="2025-03-17T17:46:35.033910817Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:35.034387 containerd[1461]: time="2025-03-17T17:46:35.034344851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:46:35.036211 containerd[1461]: time="2025-03-17T17:46:35.036164809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:35.038831 containerd[1461]: time="2025-03-17T17:46:35.038804551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.400857ms" Mar 17 17:46:35.040257 containerd[1461]: time="2025-03-17T17:46:35.040232365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.257656ms" Mar 17 17:46:35.151922 containerd[1461]: time="2025-03-17T17:46:35.151709857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:35.151922 containerd[1461]: time="2025-03-17T17:46:35.151772792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:35.151922 containerd[1461]: time="2025-03-17T17:46:35.151787642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:35.151922 containerd[1461]: time="2025-03-17T17:46:35.151861973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:35.153660 containerd[1461]: time="2025-03-17T17:46:35.153580599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:35.153660 containerd[1461]: time="2025-03-17T17:46:35.153635235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:35.153660 containerd[1461]: time="2025-03-17T17:46:35.153646393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:35.153757 containerd[1461]: time="2025-03-17T17:46:35.153715681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:35.187017 kubelet[1781]: E0317 17:46:35.186984 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:35.248695 systemd[1]: Started cri-containerd-0e0ab2adf4747f0a8d2d07bbd77f21f22dde58b48e605cc9e647b7c906e67024.scope - libcontainer container 0e0ab2adf4747f0a8d2d07bbd77f21f22dde58b48e605cc9e647b7c906e67024. Mar 17 17:46:35.250050 systemd[1]: Started cri-containerd-f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8.scope - libcontainer container f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8. Mar 17 17:46:35.268443 containerd[1461]: time="2025-03-17T17:46:35.268405764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxhbh,Uid:a31eef41-beb8-4372-ba99-cc1aa2e244f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e0ab2adf4747f0a8d2d07bbd77f21f22dde58b48e605cc9e647b7c906e67024\"" Mar 17 17:46:35.271783 containerd[1461]: time="2025-03-17T17:46:35.271073658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkmgl,Uid:525cb6d7-8b75-4824-85bd-c02669c698b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\"" Mar 17 17:46:35.271783 containerd[1461]: time="2025-03-17T17:46:35.271575670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:46:36.167386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452603001.mount: Deactivated successfully. 
Mar 17 17:46:36.187592 kubelet[1781]: E0317 17:46:36.187510 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:36.310230 kubelet[1781]: E0317 17:46:36.310165 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:36.364362 containerd[1461]: time="2025-03-17T17:46:36.364174292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:36.365042 containerd[1461]: time="2025-03-17T17:46:36.364871709Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 17 17:46:36.365715 containerd[1461]: time="2025-03-17T17:46:36.365686170Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:36.367652 containerd[1461]: time="2025-03-17T17:46:36.367619368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:36.368369 containerd[1461]: time="2025-03-17T17:46:36.368334113Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.096729686s" Mar 17 17:46:36.368369 containerd[1461]: time="2025-03-17T17:46:36.368368968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:46:36.369920 containerd[1461]: time="2025-03-17T17:46:36.369752951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:46:36.370839 containerd[1461]: time="2025-03-17T17:46:36.370808655Z" level=info msg="CreateContainer within sandbox \"0e0ab2adf4747f0a8d2d07bbd77f21f22dde58b48e605cc9e647b7c906e67024\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:46:36.383836 containerd[1461]: time="2025-03-17T17:46:36.383801918Z" level=info msg="CreateContainer within sandbox \"0e0ab2adf4747f0a8d2d07bbd77f21f22dde58b48e605cc9e647b7c906e67024\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a5ed67cdb7c45b9aa371c9ce632176c50f503cfb4581092c73db88e19d93e98\"" Mar 17 17:46:36.384341 containerd[1461]: time="2025-03-17T17:46:36.384292630Z" level=info msg="StartContainer for \"4a5ed67cdb7c45b9aa371c9ce632176c50f503cfb4581092c73db88e19d93e98\"" Mar 17 17:46:36.413702 systemd[1]: Started cri-containerd-4a5ed67cdb7c45b9aa371c9ce632176c50f503cfb4581092c73db88e19d93e98.scope - libcontainer container 4a5ed67cdb7c45b9aa371c9ce632176c50f503cfb4581092c73db88e19d93e98. 
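The entries above record the CRI image pull of registry.k8s.io/kube-proxy:v1.30.11 (roughly 25 MB in about 1.1 s), followed by CreateContainer and StartContainer inside the sandbox created earlier. The kubelet drives this through the CRI API, but an equivalent pull can be reproduced against containerd's k8s.io namespace with its native Go client. The sketch below is illustrative only, assuming the default containerd socket path and sufficient permissions; it is not the code path used here.

```go
// Hedged sketch: pull the same kube-proxy image with containerd's Go client.
// The kubelet itself talks to containerd over the CRI API instead.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.30.11", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```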
Mar 17 17:46:36.438855 containerd[1461]: time="2025-03-17T17:46:36.438744002Z" level=info msg="StartContainer for \"4a5ed67cdb7c45b9aa371c9ce632176c50f503cfb4581092c73db88e19d93e98\" returns successfully" Mar 17 17:46:37.188396 kubelet[1781]: E0317 17:46:37.188354 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:37.330885 kubelet[1781]: I0317 17:46:37.330812 1781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxhbh" podStartSLOduration=4.23275152 podStartE2EDuration="5.33079844s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="2025-03-17 17:46:35.271185314 +0000 UTC m=+3.926400706" lastFinishedPulling="2025-03-17 17:46:36.369232234 +0000 UTC m=+5.024447626" observedRunningTime="2025-03-17 17:46:37.329971508 +0000 UTC m=+5.985186901" watchObservedRunningTime="2025-03-17 17:46:37.33079844 +0000 UTC m=+5.986013833" Mar 17 17:46:37.411342 kubelet[1781]: E0317 17:46:37.411296 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.411342 kubelet[1781]: W0317 17:46:37.411324 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.411342 kubelet[1781]: E0317 17:46:37.411342 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.411500 kubelet[1781]: E0317 17:46:37.411491 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.411500 kubelet[1781]: W0317 17:46:37.411499 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.411572 kubelet[1781]: E0317 17:46:37.411510 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.411684 kubelet[1781]: E0317 17:46:37.411659 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.411684 kubelet[1781]: W0317 17:46:37.411670 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.411684 kubelet[1781]: E0317 17:46:37.411678 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:37.411829 kubelet[1781]: E0317 17:46:37.411808 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.411829 kubelet[1781]: W0317 17:46:37.411823 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.411876 kubelet[1781]: E0317 17:46:37.411831 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.412008 kubelet[1781]: E0317 17:46:37.411985 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.412008 kubelet[1781]: W0317 17:46:37.411996 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.412008 kubelet[1781]: E0317 17:46:37.412004 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.412413 kubelet[1781]: E0317 17:46:37.412390 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.412413 kubelet[1781]: W0317 17:46:37.412402 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.412463 kubelet[1781]: E0317 17:46:37.412441 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.412630 kubelet[1781]: E0317 17:46:37.412609 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.412630 kubelet[1781]: W0317 17:46:37.412622 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.412671 kubelet[1781]: E0317 17:46:37.412630 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.412768 kubelet[1781]: E0317 17:46:37.412756 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.412790 kubelet[1781]: W0317 17:46:37.412770 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.412790 kubelet[1781]: E0317 17:46:37.412778 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:37.412928 kubelet[1781]: E0317 17:46:37.412917 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.412928 kubelet[1781]: W0317 17:46:37.412927 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.412967 kubelet[1781]: E0317 17:46:37.412934 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.413073 kubelet[1781]: E0317 17:46:37.413058 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413094 kubelet[1781]: W0317 17:46:37.413072 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413094 kubelet[1781]: E0317 17:46:37.413080 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.413207 kubelet[1781]: E0317 17:46:37.413197 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413229 kubelet[1781]: W0317 17:46:37.413211 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413229 kubelet[1781]: E0317 17:46:37.413219 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.413349 kubelet[1781]: E0317 17:46:37.413339 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413369 kubelet[1781]: W0317 17:46:37.413353 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413369 kubelet[1781]: E0317 17:46:37.413360 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.413500 kubelet[1781]: E0317 17:46:37.413489 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413530 kubelet[1781]: W0317 17:46:37.413502 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413530 kubelet[1781]: E0317 17:46:37.413511 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:37.413656 kubelet[1781]: E0317 17:46:37.413645 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413683 kubelet[1781]: W0317 17:46:37.413659 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413683 kubelet[1781]: E0317 17:46:37.413667 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.413793 kubelet[1781]: E0317 17:46:37.413783 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413814 kubelet[1781]: W0317 17:46:37.413797 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413814 kubelet[1781]: E0317 17:46:37.413807 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.413945 kubelet[1781]: E0317 17:46:37.413931 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.413945 kubelet[1781]: W0317 17:46:37.413944 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.413991 kubelet[1781]: E0317 17:46:37.413952 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.414091 kubelet[1781]: E0317 17:46:37.414080 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.414113 kubelet[1781]: W0317 17:46:37.414093 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.414113 kubelet[1781]: E0317 17:46:37.414101 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.414233 kubelet[1781]: E0317 17:46:37.414223 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.414253 kubelet[1781]: W0317 17:46:37.414236 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.414253 kubelet[1781]: E0317 17:46:37.414244 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:37.414374 kubelet[1781]: E0317 17:46:37.414365 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.414394 kubelet[1781]: W0317 17:46:37.414378 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.414415 kubelet[1781]: E0317 17:46:37.414402 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.414551 kubelet[1781]: E0317 17:46:37.414541 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.414551 kubelet[1781]: W0317 17:46:37.414551 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.414595 kubelet[1781]: E0317 17:46:37.414558 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.419967 kubelet[1781]: E0317 17:46:37.419944 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.419967 kubelet[1781]: W0317 17:46:37.419960 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.420017 kubelet[1781]: E0317 17:46:37.419974 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.420171 kubelet[1781]: E0317 17:46:37.420148 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.420171 kubelet[1781]: W0317 17:46:37.420160 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.420171 kubelet[1781]: E0317 17:46:37.420175 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.420381 kubelet[1781]: E0317 17:46:37.420355 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.420381 kubelet[1781]: W0317 17:46:37.420368 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.420381 kubelet[1781]: E0317 17:46:37.420381 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:37.420605 kubelet[1781]: E0317 17:46:37.420583 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.420605 kubelet[1781]: W0317 17:46:37.420596 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.420702 kubelet[1781]: E0317 17:46:37.420609 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.420759 kubelet[1781]: E0317 17:46:37.420745 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.420759 kubelet[1781]: W0317 17:46:37.420756 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.420811 kubelet[1781]: E0317 17:46:37.420768 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.420943 kubelet[1781]: E0317 17:46:37.420917 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.420943 kubelet[1781]: W0317 17:46:37.420928 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.420943 kubelet[1781]: E0317 17:46:37.420940 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.421421 kubelet[1781]: E0317 17:46:37.421310 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.421421 kubelet[1781]: W0317 17:46:37.421327 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.421421 kubelet[1781]: E0317 17:46:37.421346 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.421585 kubelet[1781]: E0317 17:46:37.421572 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.421653 kubelet[1781]: W0317 17:46:37.421640 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.421791 kubelet[1781]: E0317 17:46:37.421708 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:46:37.421907 kubelet[1781]: E0317 17:46:37.421894 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.421961 kubelet[1781]: W0317 17:46:37.421950 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.422017 kubelet[1781]: E0317 17:46:37.422006 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.422260 kubelet[1781]: E0317 17:46:37.422228 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.422260 kubelet[1781]: W0317 17:46:37.422244 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.422260 kubelet[1781]: E0317 17:46:37.422259 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.422439 kubelet[1781]: E0317 17:46:37.422425 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.422439 kubelet[1781]: W0317 17:46:37.422436 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.422480 kubelet[1781]: E0317 17:46:37.422444 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.422799 kubelet[1781]: E0317 17:46:37.422784 1781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:46:37.422799 kubelet[1781]: W0317 17:46:37.422797 1781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:46:37.422860 kubelet[1781]: E0317 17:46:37.422807 1781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:46:37.462554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856673923.mount: Deactivated successfully. 
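The repeated kubelet errors above are FlexVolume plugin probing: on each probe the kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and invokes its uds driver with the init argument. The binary is not installed yet, so the call produces no output, and unmarshalling that empty output as JSON yields "unexpected end of JSON input". A minimal sketch of the failing sequence follows; the type and function names are illustrative rather than the kubelet's internals, and the expected reply shape ({"status":"Success","capabilities":{"attach":false}}) follows the documented FlexVolume convention.

```go
// Hedged sketch of the FlexVolume "init" call that fails above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the conventional FlexVolume reply shape,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(driver string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(driver, args...).CombinedOutput()
	if err != nil {
		// Missing binary: the call fails and out stays empty.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	// Unmarshalling empty output is what produces
	// "unexpected end of JSON input" in the log above.
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command %q: %w", args[0], uerr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```

Once a driver binary that answers init with a well-formed success reply is present in that directory, these probe errors should stop recurring.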
Mar 17 17:46:37.540301 containerd[1461]: time="2025-03-17T17:46:37.540089262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:37.541095 containerd[1461]: time="2025-03-17T17:46:37.540890221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6490047" Mar 17 17:46:37.541877 containerd[1461]: time="2025-03-17T17:46:37.541817983Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:37.543796 containerd[1461]: time="2025-03-17T17:46:37.543745422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:37.544606 containerd[1461]: time="2025-03-17T17:46:37.544570962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.174757115s" Mar 17 17:46:37.544655 containerd[1461]: time="2025-03-17T17:46:37.544606561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 17 17:46:37.546836 containerd[1461]: time="2025-03-17T17:46:37.546805070Z" level=info msg="CreateContainer within sandbox \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:46:37.557413 containerd[1461]: time="2025-03-17T17:46:37.557366448Z" level=info msg="CreateContainer within sandbox \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa\"" Mar 17 17:46:37.557867 containerd[1461]: time="2025-03-17T17:46:37.557846419Z" level=info msg="StartContainer for \"49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa\"" Mar 17 17:46:37.588724 systemd[1]: Started cri-containerd-49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa.scope - libcontainer container 49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa. Mar 17 17:46:37.615375 containerd[1461]: time="2025-03-17T17:46:37.614940628Z" level=info msg="StartContainer for \"49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa\" returns successfully" Mar 17 17:46:37.640165 systemd[1]: cri-containerd-49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa.scope: Deactivated successfully. 
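The container that just ran and exited (id 49bc70d5...) is the flexvol-driver container created from Calico's pod2daemon-flexvol image. In Calico's standard manifests this is an init container that, as far as those manifests suggest, installs the uds FlexVolume driver binary into the kubelet plugin directory (the same nodeagent~uds path the kubelet has been probing), then exits, which is why containerd tears down its shim in the next entries. A rough sketch of that install step follows; the source path is purely an assumption for illustration, and only the destination directory is taken from the log.

```go
// Hedged sketch of what a flexvol installer init container does:
// copy a driver binary into the kubelet plugin dir and mark it executable.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func installDriver(src, pluginDir string) error {
	if err := os.MkdirAll(pluginDir, 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	dst := filepath.Join(pluginDir, "uds")
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Both paths are assumptions for illustration; the destination matches
	// the plugin directory seen in the kubelet errors above.
	if err := installDriver("/usr/local/bin/flexvol",
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"); err != nil {
		log.Fatal(err)
	}
}
```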
Mar 17 17:46:37.802560 containerd[1461]: time="2025-03-17T17:46:37.802489987Z" level=info msg="shim disconnected" id=49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa namespace=k8s.io Mar 17 17:46:37.802560 containerd[1461]: time="2025-03-17T17:46:37.802555298Z" level=warning msg="cleaning up after shim disconnected" id=49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa namespace=k8s.io Mar 17 17:46:37.802560 containerd[1461]: time="2025-03-17T17:46:37.802564486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:38.189189 kubelet[1781]: E0317 17:46:38.189037 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:38.310182 kubelet[1781]: E0317 17:46:38.309846 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:38.324275 containerd[1461]: time="2025-03-17T17:46:38.324239636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:46:38.440687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49bc70d53be63c3c5a9e866b06d59ec98b5998056ebd2b75e55c0ea7a0a5f7fa-rootfs.mount: Deactivated successfully. Mar 17 17:46:39.189707 kubelet[1781]: E0317 17:46:39.189665 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:40.190757 kubelet[1781]: E0317 17:46:40.190718 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:40.309903 kubelet[1781]: E0317 17:46:40.309503 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:40.365036 containerd[1461]: time="2025-03-17T17:46:40.364986083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:40.365893 containerd[1461]: time="2025-03-17T17:46:40.365772634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 17 17:46:40.366546 containerd[1461]: time="2025-03-17T17:46:40.366463028Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:40.369065 containerd[1461]: time="2025-03-17T17:46:40.369022247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:40.370393 containerd[1461]: time="2025-03-17T17:46:40.370355773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size 
\"92597153\" in 2.046078673s" Mar 17 17:46:40.370393 containerd[1461]: time="2025-03-17T17:46:40.370387693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 17 17:46:40.372634 containerd[1461]: time="2025-03-17T17:46:40.372603170Z" level=info msg="CreateContainer within sandbox \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:46:40.384822 containerd[1461]: time="2025-03-17T17:46:40.384722979Z" level=info msg="CreateContainer within sandbox \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7\"" Mar 17 17:46:40.385315 containerd[1461]: time="2025-03-17T17:46:40.385285735Z" level=info msg="StartContainer for \"7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7\"" Mar 17 17:46:40.411696 systemd[1]: Started cri-containerd-7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7.scope - libcontainer container 7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7. Mar 17 17:46:40.434205 containerd[1461]: time="2025-03-17T17:46:40.434165183Z" level=info msg="StartContainer for \"7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7\" returns successfully" Mar 17 17:46:40.854643 containerd[1461]: time="2025-03-17T17:46:40.854560818Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:46:40.856284 systemd[1]: cri-containerd-7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7.scope: Deactivated successfully. Mar 17 17:46:40.856709 systemd[1]: cri-containerd-7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7.scope: Consumed 431ms CPU time, 168.6M memory peak, 150.3M written to disk. Mar 17 17:46:40.872748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7-rootfs.mount: Deactivated successfully. 
Mar 17 17:46:40.941994 kubelet[1781]: I0317 17:46:40.941947 1781 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:46:41.094744 containerd[1461]: time="2025-03-17T17:46:41.094667143Z" level=info msg="shim disconnected" id=7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7 namespace=k8s.io Mar 17 17:46:41.094744 containerd[1461]: time="2025-03-17T17:46:41.094719610Z" level=warning msg="cleaning up after shim disconnected" id=7dbd601a921a1131149cfea47e39dc19e27218258e5f8f5e344a7034458dadc7 namespace=k8s.io Mar 17 17:46:41.094744 containerd[1461]: time="2025-03-17T17:46:41.094733165Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:41.191191 kubelet[1781]: E0317 17:46:41.190991 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:41.331279 containerd[1461]: time="2025-03-17T17:46:41.330277015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:46:42.191323 kubelet[1781]: E0317 17:46:42.191266 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:42.317974 systemd[1]: Created slice kubepods-besteffort-pod7eba3d18_846c_46fc_aea9_9a59d3672cd4.slice - libcontainer container kubepods-besteffort-pod7eba3d18_846c_46fc_aea9_9a59d3672cd4.slice. Mar 17 17:46:42.320343 containerd[1461]: time="2025-03-17T17:46:42.319946293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:0,}" Mar 17 17:46:42.451794 containerd[1461]: time="2025-03-17T17:46:42.451667142Z" level=error msg="Failed to destroy network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:42.452648 containerd[1461]: time="2025-03-17T17:46:42.452095584Z" level=error msg="encountered an error cleaning up failed sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:42.452648 containerd[1461]: time="2025-03-17T17:46:42.452173559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:42.452738 kubelet[1781]: E0317 17:46:42.452375 1781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:42.452738 kubelet[1781]: E0317 17:46:42.452440 1781 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:42.452738 kubelet[1781]: E0317 17:46:42.452459 1781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:42.452846 kubelet[1781]: E0317 17:46:42.452497 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njpv9_calico-system(7eba3d18-846c-46fc-aea9-9a59d3672cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-njpv9_calico-system(7eba3d18-846c-46fc-aea9-9a59d3672cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:42.466701 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a-shm.mount: Deactivated successfully. Mar 17 17:46:43.192026 kubelet[1781]: E0317 17:46:43.191990 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:43.333647 kubelet[1781]: I0317 17:46:43.333612 1781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a" Mar 17 17:46:43.334433 containerd[1461]: time="2025-03-17T17:46:43.334145621Z" level=info msg="StopPodSandbox for \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\"" Mar 17 17:46:43.334433 containerd[1461]: time="2025-03-17T17:46:43.334308370Z" level=info msg="Ensure that sandbox 8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a in task-service has been cleanup successfully" Mar 17 17:46:43.335790 containerd[1461]: time="2025-03-17T17:46:43.335766567Z" level=info msg="TearDown network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\" successfully" Mar 17 17:46:43.336090 containerd[1461]: time="2025-03-17T17:46:43.336071517Z" level=info msg="StopPodSandbox for \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\" returns successfully" Mar 17 17:46:43.336125 systemd[1]: run-netns-cni\x2d1a682864\x2ddef9\x2df629\x2d2418\x2d65ff43214898.mount: Deactivated successfully. 
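Each failed sandbox attempt above is the Calico CNI plugin refusing to proceed: per its own error text it stats /var/lib/calico/nodename, a file written by the calico/node container once it is running with /var/lib/calico mounted, and gives up until that file exists, leaving the kubelet to tear the sandbox down and retry with an incremented Attempt counter. A small sketch of that precondition check follows (not Calico's actual code).

```go
// Hedged sketch of the precondition the Calico CNI plugin is reporting:
// /var/lib/calico/nodename must exist before pod networking can be set up.
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

func calicoNodeName() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat /var/lib/calico/nodename: no such file or directory: " +
			"check that the calico/node container is running and has mounted /var/lib/calico/")
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("calico node name:", name)
}
```

Once calico-node, whose container is being created near the end of this excerpt, writes that file, the retried sandbox attempts should begin to succeed.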
Mar 17 17:46:43.337721 containerd[1461]: time="2025-03-17T17:46:43.337675147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:1,}" Mar 17 17:46:43.404721 containerd[1461]: time="2025-03-17T17:46:43.404546330Z" level=error msg="Failed to destroy network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:43.405203 containerd[1461]: time="2025-03-17T17:46:43.405177177Z" level=error msg="encountered an error cleaning up failed sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:43.405348 containerd[1461]: time="2025-03-17T17:46:43.405321213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:43.405976 kubelet[1781]: E0317 17:46:43.405630 1781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:43.405976 kubelet[1781]: E0317 17:46:43.405687 1781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:43.405976 kubelet[1781]: E0317 17:46:43.405712 1781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:43.406232 kubelet[1781]: E0317 17:46:43.405746 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njpv9_calico-system(7eba3d18-846c-46fc-aea9-9a59d3672cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-njpv9_calico-system(7eba3d18-846c-46fc-aea9-9a59d3672cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:43.406160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6-shm.mount: Deactivated successfully. Mar 17 17:46:43.490877 kubelet[1781]: I0317 17:46:43.490764 1781 topology_manager.go:215] "Topology Admit Handler" podUID="d06c9de7-87f0-4e58-abfc-40404395e830" podNamespace="default" podName="nginx-deployment-85f456d6dd-4fv79" Mar 17 17:46:43.497412 systemd[1]: Created slice kubepods-besteffort-podd06c9de7_87f0_4e58_abfc_40404395e830.slice - libcontainer container kubepods-besteffort-podd06c9de7_87f0_4e58_abfc_40404395e830.slice. Mar 17 17:46:43.560636 kubelet[1781]: I0317 17:46:43.560600 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnxh\" (UniqueName: \"kubernetes.io/projected/d06c9de7-87f0-4e58-abfc-40404395e830-kube-api-access-ffnxh\") pod \"nginx-deployment-85f456d6dd-4fv79\" (UID: \"d06c9de7-87f0-4e58-abfc-40404395e830\") " pod="default/nginx-deployment-85f456d6dd-4fv79" Mar 17 17:46:43.800569 containerd[1461]: time="2025-03-17T17:46:43.800460889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4fv79,Uid:d06c9de7-87f0-4e58-abfc-40404395e830,Namespace:default,Attempt:0,}" Mar 17 17:46:44.052728 containerd[1461]: time="2025-03-17T17:46:44.052131061Z" level=error msg="Failed to destroy network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.057337 containerd[1461]: time="2025-03-17T17:46:44.057285154Z" level=error msg="encountered an error cleaning up failed sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.057599 containerd[1461]: time="2025-03-17T17:46:44.057577109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4fv79,Uid:d06c9de7-87f0-4e58-abfc-40404395e830,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.057947 kubelet[1781]: E0317 17:46:44.057914 1781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.058157 kubelet[1781]: E0317 17:46:44.058122 1781 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4fv79" Mar 17 17:46:44.058295 kubelet[1781]: E0317 17:46:44.058241 1781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4fv79" Mar 17 17:46:44.059170 kubelet[1781]: E0317 17:46:44.058506 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4fv79_default(d06c9de7-87f0-4e58-abfc-40404395e830)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4fv79_default(d06c9de7-87f0-4e58-abfc-40404395e830)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4fv79" podUID="d06c9de7-87f0-4e58-abfc-40404395e830" Mar 17 17:46:44.192981 kubelet[1781]: E0317 17:46:44.192918 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:44.336439 kubelet[1781]: I0317 17:46:44.335865 1781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6" Mar 17 17:46:44.336556 containerd[1461]: time="2025-03-17T17:46:44.336304240Z" level=info msg="StopPodSandbox for \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\"" Mar 17 17:46:44.336556 containerd[1461]: time="2025-03-17T17:46:44.336482406Z" level=info msg="Ensure that sandbox 0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6 in task-service has been cleanup successfully" Mar 17 17:46:44.336828 containerd[1461]: time="2025-03-17T17:46:44.336651194Z" level=info msg="TearDown network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\" successfully" Mar 17 17:46:44.336828 containerd[1461]: time="2025-03-17T17:46:44.336664325Z" level=info msg="StopPodSandbox for \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\" returns successfully" Mar 17 17:46:44.337529 kubelet[1781]: I0317 17:46:44.337359 1781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a" Mar 17 17:46:44.337599 containerd[1461]: time="2025-03-17T17:46:44.337371163Z" level=info msg="StopPodSandbox for \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\"" Mar 17 17:46:44.337599 containerd[1461]: time="2025-03-17T17:46:44.337453062Z" level=info msg="TearDown network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\" 
successfully" Mar 17 17:46:44.337599 containerd[1461]: time="2025-03-17T17:46:44.337465115Z" level=info msg="StopPodSandbox for \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\" returns successfully" Mar 17 17:46:44.337769 containerd[1461]: time="2025-03-17T17:46:44.337746015Z" level=info msg="StopPodSandbox for \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\"" Mar 17 17:46:44.338130 containerd[1461]: time="2025-03-17T17:46:44.337887861Z" level=info msg="Ensure that sandbox 2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a in task-service has been cleanup successfully" Mar 17 17:46:44.338130 containerd[1461]: time="2025-03-17T17:46:44.337934798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:2,}" Mar 17 17:46:44.339751 containerd[1461]: time="2025-03-17T17:46:44.339713987Z" level=info msg="TearDown network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\" successfully" Mar 17 17:46:44.339751 containerd[1461]: time="2025-03-17T17:46:44.339741486Z" level=info msg="StopPodSandbox for \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\" returns successfully" Mar 17 17:46:44.340272 containerd[1461]: time="2025-03-17T17:46:44.340236073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4fv79,Uid:d06c9de7-87f0-4e58-abfc-40404395e830,Namespace:default,Attempt:1,}" Mar 17 17:46:44.351326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a-shm.mount: Deactivated successfully. Mar 17 17:46:44.351423 systemd[1]: run-netns-cni\x2df836d741\x2d790a\x2ddac6\x2d781e\x2d41ccd47ba2bb.mount: Deactivated successfully. Mar 17 17:46:44.351469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348213014.mount: Deactivated successfully. 
Mar 17 17:46:44.423774 containerd[1461]: time="2025-03-17T17:46:44.423723226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:44.429023 containerd[1461]: time="2025-03-17T17:46:44.428972270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:46:44.433328 containerd[1461]: time="2025-03-17T17:46:44.433196457Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:44.439012 containerd[1461]: time="2025-03-17T17:46:44.438976647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:44.439721 containerd[1461]: time="2025-03-17T17:46:44.439604580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.109203165s" Mar 17 17:46:44.439721 containerd[1461]: time="2025-03-17T17:46:44.439634314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:46:44.453370 containerd[1461]: time="2025-03-17T17:46:44.453308823Z" level=info msg="CreateContainer within sandbox \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:46:44.473221 containerd[1461]: time="2025-03-17T17:46:44.473174095Z" level=info msg="CreateContainer within sandbox \"f5cdbd6593983fef9ad1c2c7641cf414d874afe076e6fd778c1cce29671bd8a8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5719334f62e064ad86669d65cfb3bbf411e97f61dae16d0c5e5072643ec60d25\"" Mar 17 17:46:44.474221 containerd[1461]: time="2025-03-17T17:46:44.473637351Z" level=info msg="StartContainer for \"5719334f62e064ad86669d65cfb3bbf411e97f61dae16d0c5e5072643ec60d25\"" Mar 17 17:46:44.494542 containerd[1461]: time="2025-03-17T17:46:44.494490002Z" level=error msg="Failed to destroy network for sandbox \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.495392 containerd[1461]: time="2025-03-17T17:46:44.495363352Z" level=error msg="encountered an error cleaning up failed sandbox \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.495541 containerd[1461]: time="2025-03-17T17:46:44.495502006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.495818 kubelet[1781]: E0317 17:46:44.495771 1781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.495891 kubelet[1781]: E0317 17:46:44.495842 1781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:44.495891 kubelet[1781]: E0317 17:46:44.495861 1781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njpv9" Mar 17 17:46:44.495935 kubelet[1781]: E0317 17:46:44.495901 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njpv9_calico-system(7eba3d18-846c-46fc-aea9-9a59d3672cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-njpv9_calico-system(7eba3d18-846c-46fc-aea9-9a59d3672cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njpv9" podUID="7eba3d18-846c-46fc-aea9-9a59d3672cd4" Mar 17 17:46:44.496933 containerd[1461]: time="2025-03-17T17:46:44.496718199Z" level=error msg="Failed to destroy network for sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.497074 containerd[1461]: time="2025-03-17T17:46:44.497039649Z" level=error msg="encountered an error cleaning up failed sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.497433 containerd[1461]: time="2025-03-17T17:46:44.497208755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4fv79,Uid:d06c9de7-87f0-4e58-abfc-40404395e830,Namespace:default,Attempt:1,} 
failed, error" error="failed to setup network for sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.497503 kubelet[1781]: E0317 17:46:44.497394 1781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:46:44.497503 kubelet[1781]: E0317 17:46:44.497452 1781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4fv79" Mar 17 17:46:44.497503 kubelet[1781]: E0317 17:46:44.497468 1781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4fv79" Mar 17 17:46:44.497619 kubelet[1781]: E0317 17:46:44.497498 1781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4fv79_default(d06c9de7-87f0-4e58-abfc-40404395e830)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4fv79_default(d06c9de7-87f0-4e58-abfc-40404395e830)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4fv79" podUID="d06c9de7-87f0-4e58-abfc-40404395e830" Mar 17 17:46:44.503717 systemd[1]: Started cri-containerd-5719334f62e064ad86669d65cfb3bbf411e97f61dae16d0c5e5072643ec60d25.scope - libcontainer container 5719334f62e064ad86669d65cfb3bbf411e97f61dae16d0c5e5072643ec60d25. Mar 17 17:46:44.533314 containerd[1461]: time="2025-03-17T17:46:44.533197366Z" level=info msg="StartContainer for \"5719334f62e064ad86669d65cfb3bbf411e97f61dae16d0c5e5072643ec60d25\" returns successfully" Mar 17 17:46:44.708583 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:46:44.708700 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 17 17:46:45.194449 kubelet[1781]: E0317 17:46:45.194400 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:45.343971 kubelet[1781]: I0317 17:46:45.343943 1781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d" Mar 17 17:46:45.344939 containerd[1461]: time="2025-03-17T17:46:45.344589519Z" level=info msg="StopPodSandbox for \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\"" Mar 17 17:46:45.344939 containerd[1461]: time="2025-03-17T17:46:45.344738831Z" level=info msg="Ensure that sandbox 97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d in task-service has been cleanup successfully" Mar 17 17:46:45.345245 containerd[1461]: time="2025-03-17T17:46:45.345041126Z" level=info msg="TearDown network for sandbox \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\" successfully" Mar 17 17:46:45.345245 containerd[1461]: time="2025-03-17T17:46:45.345058054Z" level=info msg="StopPodSandbox for \"97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d\" returns successfully" Mar 17 17:46:45.345385 containerd[1461]: time="2025-03-17T17:46:45.345265653Z" level=info msg="StopPodSandbox for \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\"" Mar 17 17:46:45.345462 containerd[1461]: time="2025-03-17T17:46:45.345431652Z" level=info msg="TearDown network for sandbox \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\" successfully" Mar 17 17:46:45.345486 containerd[1461]: time="2025-03-17T17:46:45.345460955Z" level=info msg="StopPodSandbox for \"0899bf92203aa21f37c51a69731dd38e07e4677ffb92f03d5ae1ac85bf658ba6\" returns successfully" Mar 17 17:46:45.345970 containerd[1461]: time="2025-03-17T17:46:45.345833715Z" level=info msg="StopPodSandbox for \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\"" Mar 17 17:46:45.345970 containerd[1461]: time="2025-03-17T17:46:45.345933562Z" level=info msg="TearDown network for sandbox \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\" successfully" Mar 17 17:46:45.345970 containerd[1461]: time="2025-03-17T17:46:45.345945299Z" level=info msg="StopPodSandbox for \"8b7f9600ea7568bd95992f0415ca8082ffde713745f40166555a659096b2d15a\" returns successfully" Mar 17 17:46:45.346742 containerd[1461]: time="2025-03-17T17:46:45.346714532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:3,}" Mar 17 17:46:45.347238 kubelet[1781]: I0317 17:46:45.346914 1781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46" Mar 17 17:46:45.347684 containerd[1461]: time="2025-03-17T17:46:45.347400008Z" level=info msg="StopPodSandbox for \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\"" Mar 17 17:46:45.347684 containerd[1461]: time="2025-03-17T17:46:45.347571516Z" level=info msg="Ensure that sandbox 7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46 in task-service has been cleanup successfully" Mar 17 17:46:45.347916 containerd[1461]: time="2025-03-17T17:46:45.347724900Z" level=info msg="TearDown network for sandbox \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\" successfully" Mar 17 17:46:45.347916 containerd[1461]: 
time="2025-03-17T17:46:45.347737555Z" level=info msg="StopPodSandbox for \"7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46\" returns successfully" Mar 17 17:46:45.348668 containerd[1461]: time="2025-03-17T17:46:45.348646119Z" level=info msg="StopPodSandbox for \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\"" Mar 17 17:46:45.348737 containerd[1461]: time="2025-03-17T17:46:45.348719777Z" level=info msg="TearDown network for sandbox \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\" successfully" Mar 17 17:46:45.348737 containerd[1461]: time="2025-03-17T17:46:45.348732153Z" level=info msg="StopPodSandbox for \"2c59b4cf8d4c78ac3def36860b06d140c2b33db12641939dd1d06668b3991f9a\" returns successfully" Mar 17 17:46:45.349109 containerd[1461]: time="2025-03-17T17:46:45.349086748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4fv79,Uid:d06c9de7-87f0-4e58-abfc-40404395e830,Namespace:default,Attempt:2,}" Mar 17 17:46:45.352441 systemd[1]: run-netns-cni\x2d80d54c74\x2d7cac\x2dd215\x2d5568\x2d787e6879f614.mount: Deactivated successfully. Mar 17 17:46:45.352561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97fd980d012f1753824fa16f0e40b473ccf740224692550e6aeb60ecee92b76d-shm.mount: Deactivated successfully. Mar 17 17:46:45.352619 systemd[1]: run-netns-cni\x2d6a6c36d4\x2d15a0\x2d915b\x2dd1a2\x2dfad226024c82.mount: Deactivated successfully. Mar 17 17:46:45.352663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bc84b47267c3e68f09e1ef5a76f19e227091a55ea9e2bcc6d1c92502170df46-shm.mount: Deactivated successfully. Mar 17 17:46:45.524170 systemd-networkd[1393]: calid8b0dbb3061: Link UP Mar 17 17:46:45.524357 systemd-networkd[1393]: calid8b0dbb3061: Gained carrier Mar 17 17:46:45.533545 kubelet[1781]: I0317 17:46:45.533384 1781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hkmgl" podStartSLOduration=4.363301531 podStartE2EDuration="13.533363296s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="2025-03-17 17:46:35.272616185 +0000 UTC m=+3.927831578" lastFinishedPulling="2025-03-17 17:46:44.44267799 +0000 UTC m=+13.097893343" observedRunningTime="2025-03-17 17:46:45.357972695 +0000 UTC m=+14.013188088" watchObservedRunningTime="2025-03-17 17:46:45.533363296 +0000 UTC m=+14.188578689" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.387 [INFO][2564] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.408 [INFO][2564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.103-k8s-csi--node--driver--njpv9-eth0 csi-node-driver- calico-system 7eba3d18-846c-46fc-aea9-9a59d3672cd4 729 0 2025-03-17 17:46:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.103 csi-node-driver-njpv9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid8b0dbb3061 [] []}} ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-" Mar 17 17:46:45.533982 containerd[1461]: 
2025-03-17 17:46:45.408 [INFO][2564] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.482 [INFO][2597] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" HandleID="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Workload="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.493 [INFO][2597] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" HandleID="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Workload="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.103", "pod":"csi-node-driver-njpv9", "timestamp":"2025-03-17 17:46:45.48281267 +0000 UTC"}, Hostname:"10.0.0.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.493 [INFO][2597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.493 [INFO][2597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.493 [INFO][2597] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.103' Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.495 [INFO][2597] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.499 [INFO][2597] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.503 [INFO][2597] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.505 [INFO][2597] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.507 [INFO][2597] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.507 [INFO][2597] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.508 [INFO][2597] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37 Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.511 [INFO][2597] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" host="10.0.0.103" Mar 17 17:46:45.533982 
containerd[1461]: 2025-03-17 17:46:45.516 [INFO][2597] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.1/26] block=192.168.126.0/26 handle="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.516 [INFO][2597] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.1/26] handle="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" host="10.0.0.103" Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.516 [INFO][2597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:46:45.533982 containerd[1461]: 2025-03-17 17:46:45.516 [INFO][2597] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.1/26] IPv6=[] ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" HandleID="k8s-pod-network.0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Workload="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.534549 containerd[1461]: 2025-03-17 17:46:45.518 [INFO][2564] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-csi--node--driver--njpv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7eba3d18-846c-46fc-aea9-9a59d3672cd4", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"", Pod:"csi-node-driver-njpv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid8b0dbb3061", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:46:45.534549 containerd[1461]: 2025-03-17 17:46:45.518 [INFO][2564] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.1/32] ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.534549 containerd[1461]: 2025-03-17 17:46:45.518 [INFO][2564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8b0dbb3061 ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.534549 containerd[1461]: 2025-03-17 
17:46:45.524 [INFO][2564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.534549 containerd[1461]: 2025-03-17 17:46:45.525 [INFO][2564] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-csi--node--driver--njpv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7eba3d18-846c-46fc-aea9-9a59d3672cd4", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37", Pod:"csi-node-driver-njpv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid8b0dbb3061", MAC:"52:a5:61:2e:40:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:46:45.534549 containerd[1461]: 2025-03-17 17:46:45.532 [INFO][2564] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37" Namespace="calico-system" Pod="csi-node-driver-njpv9" WorkloadEndpoint="10.0.0.103-k8s-csi--node--driver--njpv9-eth0" Mar 17 17:46:45.548544 systemd-networkd[1393]: cali89fb7499aa6: Link UP Mar 17 17:46:45.548757 systemd-networkd[1393]: cali89fb7499aa6: Gained carrier Mar 17 17:46:45.549960 containerd[1461]: time="2025-03-17T17:46:45.549896025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:45.549960 containerd[1461]: time="2025-03-17T17:46:45.549947525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:45.550109 containerd[1461]: time="2025-03-17T17:46:45.549962976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:45.550109 containerd[1461]: time="2025-03-17T17:46:45.550039588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.393 [INFO][2580] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.408 [INFO][2580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0 nginx-deployment-85f456d6dd- default d06c9de7-87f0-4e58-abfc-40404395e830 917 0 2025-03-17 17:46:43 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.103 nginx-deployment-85f456d6dd-4fv79 eth0 default [] [] [kns.default ksa.default.default] cali89fb7499aa6 [] []}} ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.409 [INFO][2580] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.482 [INFO][2599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" HandleID="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Workload="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.494 [INFO][2599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" HandleID="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Workload="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000189ca0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.103", "pod":"nginx-deployment-85f456d6dd-4fv79", "timestamp":"2025-03-17 17:46:45.482813468 +0000 UTC"}, Hostname:"10.0.0.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.494 [INFO][2599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.516 [INFO][2599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.516 [INFO][2599] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.103' Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.518 [INFO][2599] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.522 [INFO][2599] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.527 [INFO][2599] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.529 [INFO][2599] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.532 [INFO][2599] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.532 [INFO][2599] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.534 [INFO][2599] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.538 [INFO][2599] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.543 [INFO][2599] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.2/26] block=192.168.126.0/26 handle="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.543 [INFO][2599] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.2/26] handle="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" host="10.0.0.103" Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.543 [INFO][2599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:46:45.558152 containerd[1461]: 2025-03-17 17:46:45.543 [INFO][2599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.2/26] IPv6=[] ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" HandleID="k8s-pod-network.08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Workload="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.558864 containerd[1461]: 2025-03-17 17:46:45.545 [INFO][2580] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"d06c9de7-87f0-4e58-abfc-40404395e830", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-4fv79", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali89fb7499aa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:46:45.558864 containerd[1461]: 2025-03-17 17:46:45.545 [INFO][2580] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.2/32] ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.558864 containerd[1461]: 2025-03-17 17:46:45.545 [INFO][2580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89fb7499aa6 ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.558864 containerd[1461]: 2025-03-17 17:46:45.548 [INFO][2580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.558864 containerd[1461]: 2025-03-17 17:46:45.548 [INFO][2580] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"d06c9de7-87f0-4e58-abfc-40404395e830", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed", Pod:"nginx-deployment-85f456d6dd-4fv79", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali89fb7499aa6", MAC:"ee:64:65:a2:12:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:46:45.558864 containerd[1461]: 2025-03-17 17:46:45.556 [INFO][2580] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-4fv79" WorkloadEndpoint="10.0.0.103-k8s-nginx--deployment--85f456d6dd--4fv79-eth0" Mar 17 17:46:45.568688 systemd[1]: Started cri-containerd-0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37.scope - libcontainer container 0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37. Mar 17 17:46:45.575274 containerd[1461]: time="2025-03-17T17:46:45.575193256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:45.575274 containerd[1461]: time="2025-03-17T17:46:45.575235734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:45.575274 containerd[1461]: time="2025-03-17T17:46:45.575246473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:45.575798 containerd[1461]: time="2025-03-17T17:46:45.575510962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:45.577833 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:46:45.594697 systemd[1]: Started cri-containerd-08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed.scope - libcontainer container 08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed. 
Mar 17 17:46:45.595488 containerd[1461]: time="2025-03-17T17:46:45.595390423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njpv9,Uid:7eba3d18-846c-46fc-aea9-9a59d3672cd4,Namespace:calico-system,Attempt:3,} returns sandbox id \"0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37\"" Mar 17 17:46:45.596868 containerd[1461]: time="2025-03-17T17:46:45.596755664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:46:45.605864 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:46:45.620955 containerd[1461]: time="2025-03-17T17:46:45.620922200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4fv79,Uid:d06c9de7-87f0-4e58-abfc-40404395e830,Namespace:default,Attempt:2,} returns sandbox id \"08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed\"" Mar 17 17:46:46.073585 kernel: bpftool[2845]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:46:46.195365 kubelet[1781]: E0317 17:46:46.195308 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:46.228862 systemd-networkd[1393]: vxlan.calico: Link UP Mar 17 17:46:46.228867 systemd-networkd[1393]: vxlan.calico: Gained carrier Mar 17 17:46:46.365762 systemd[1]: run-containerd-runc-k8s.io-5719334f62e064ad86669d65cfb3bbf411e97f61dae16d0c5e5072643ec60d25-runc.GQnUWe.mount: Deactivated successfully. Mar 17 17:46:46.612322 containerd[1461]: time="2025-03-17T17:46:46.612262491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:46.612803 containerd[1461]: time="2025-03-17T17:46:46.612752902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:46:46.613600 containerd[1461]: time="2025-03-17T17:46:46.613569401Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:46.616030 containerd[1461]: time="2025-03-17T17:46:46.615949617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:46.616618 containerd[1461]: time="2025-03-17T17:46:46.616585662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.019793746s" Mar 17 17:46:46.616656 containerd[1461]: time="2025-03-17T17:46:46.616616011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:46:46.617909 containerd[1461]: time="2025-03-17T17:46:46.617882549Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:46:46.618620 containerd[1461]: time="2025-03-17T17:46:46.618580410Z" level=info msg="CreateContainer within sandbox \"0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:46:46.629030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660308597.mount: Deactivated successfully. Mar 17 17:46:46.632083 containerd[1461]: time="2025-03-17T17:46:46.632009666Z" level=info msg="CreateContainer within sandbox \"0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"13938c2775869d45a12f7df5228687558084d534362ce09ab68bd05f992a6794\"" Mar 17 17:46:46.632551 containerd[1461]: time="2025-03-17T17:46:46.632393777Z" level=info msg="StartContainer for \"13938c2775869d45a12f7df5228687558084d534362ce09ab68bd05f992a6794\"" Mar 17 17:46:46.663699 systemd[1]: Started cri-containerd-13938c2775869d45a12f7df5228687558084d534362ce09ab68bd05f992a6794.scope - libcontainer container 13938c2775869d45a12f7df5228687558084d534362ce09ab68bd05f992a6794. Mar 17 17:46:46.690345 containerd[1461]: time="2025-03-17T17:46:46.689584929Z" level=info msg="StartContainer for \"13938c2775869d45a12f7df5228687558084d534362ce09ab68bd05f992a6794\" returns successfully" Mar 17 17:46:47.195870 kubelet[1781]: E0317 17:46:47.195830 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:47.400661 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Mar 17 17:46:47.528641 systemd-networkd[1393]: calid8b0dbb3061: Gained IPv6LL Mar 17 17:46:47.592624 systemd-networkd[1393]: cali89fb7499aa6: Gained IPv6LL Mar 17 17:46:48.196782 kubelet[1781]: E0317 17:46:48.196739 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:48.445405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729485495.mount: Deactivated successfully. 
Mar 17 17:46:49.197431 kubelet[1781]: E0317 17:46:49.197363 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:49.325908 containerd[1461]: time="2025-03-17T17:46:49.325863981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:49.326786 containerd[1461]: time="2025-03-17T17:46:49.326389786Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69703867" Mar 17 17:46:49.328036 containerd[1461]: time="2025-03-17T17:46:49.327508479Z" level=info msg="ImageCreate event name:\"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:49.330369 containerd[1461]: time="2025-03-17T17:46:49.330341393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:49.331675 containerd[1461]: time="2025-03-17T17:46:49.331364075Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 2.713455405s" Mar 17 17:46:49.331675 containerd[1461]: time="2025-03-17T17:46:49.331394840Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:46:49.333204 containerd[1461]: time="2025-03-17T17:46:49.333132034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:46:49.333873 containerd[1461]: time="2025-03-17T17:46:49.333849781Z" level=info msg="CreateContainer within sandbox \"08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:46:49.347455 containerd[1461]: time="2025-03-17T17:46:49.347360048Z" level=info msg="CreateContainer within sandbox \"08ccd74f713ccc46a234de5bd01767c21f7c1c8089fb8a1b99fab435ae9a39ed\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b7b5138b45bbace966e26e9c45b18197d47a59b0b4a176691ea7525e8cd06ebe\"" Mar 17 17:46:49.351962 containerd[1461]: time="2025-03-17T17:46:49.351912256Z" level=info msg="StartContainer for \"b7b5138b45bbace966e26e9c45b18197d47a59b0b4a176691ea7525e8cd06ebe\"" Mar 17 17:46:49.443720 systemd[1]: Started cri-containerd-b7b5138b45bbace966e26e9c45b18197d47a59b0b4a176691ea7525e8cd06ebe.scope - libcontainer container b7b5138b45bbace966e26e9c45b18197d47a59b0b4a176691ea7525e8cd06ebe. 
Mar 17 17:46:49.481659 containerd[1461]: time="2025-03-17T17:46:49.481553948Z" level=info msg="StartContainer for \"b7b5138b45bbace966e26e9c45b18197d47a59b0b4a176691ea7525e8cd06ebe\" returns successfully" Mar 17 17:46:50.197901 kubelet[1781]: E0317 17:46:50.197860 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:50.386672 kubelet[1781]: I0317 17:46:50.386547 1781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-4fv79" podStartSLOduration=3.676056393 podStartE2EDuration="7.386513731s" podCreationTimestamp="2025-03-17 17:46:43 +0000 UTC" firstStartedPulling="2025-03-17 17:46:45.621971971 +0000 UTC m=+14.277187324" lastFinishedPulling="2025-03-17 17:46:49.332429269 +0000 UTC m=+17.987644662" observedRunningTime="2025-03-17 17:46:50.386327796 +0000 UTC m=+19.041543149" watchObservedRunningTime="2025-03-17 17:46:50.386513731 +0000 UTC m=+19.041729124" Mar 17 17:46:50.526602 containerd[1461]: time="2025-03-17T17:46:50.526461020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:50.527558 containerd[1461]: time="2025-03-17T17:46:50.527499631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 17 17:46:50.528312 containerd[1461]: time="2025-03-17T17:46:50.528274104Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:50.530921 containerd[1461]: time="2025-03-17T17:46:50.530881722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:50.531405 containerd[1461]: time="2025-03-17T17:46:50.531326202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.198163921s" Mar 17 17:46:50.531405 containerd[1461]: time="2025-03-17T17:46:50.531399569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 17 17:46:50.534420 containerd[1461]: time="2025-03-17T17:46:50.533464165Z" level=info msg="CreateContainer within sandbox \"0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:46:50.544920 containerd[1461]: time="2025-03-17T17:46:50.544848011Z" level=info msg="CreateContainer within sandbox \"0ddef019b7f0c20156b9b38786de545743657c4f26249c59635c159e74d54a37\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3a77e1ed047cf0265cc9140d62ed036df7050afb1cdf02f4d5c8ab1947ca7c9f\"" Mar 17 17:46:50.545550 containerd[1461]: time="2025-03-17T17:46:50.545271552Z" level=info msg="StartContainer for 
\"3a77e1ed047cf0265cc9140d62ed036df7050afb1cdf02f4d5c8ab1947ca7c9f\"" Mar 17 17:46:50.585726 systemd[1]: Started cri-containerd-3a77e1ed047cf0265cc9140d62ed036df7050afb1cdf02f4d5c8ab1947ca7c9f.scope - libcontainer container 3a77e1ed047cf0265cc9140d62ed036df7050afb1cdf02f4d5c8ab1947ca7c9f. Mar 17 17:46:50.620012 containerd[1461]: time="2025-03-17T17:46:50.619972534Z" level=info msg="StartContainer for \"3a77e1ed047cf0265cc9140d62ed036df7050afb1cdf02f4d5c8ab1947ca7c9f\" returns successfully" Mar 17 17:46:51.198696 kubelet[1781]: E0317 17:46:51.198652 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:51.341111 kubelet[1781]: I0317 17:46:51.341073 1781 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:46:51.341111 kubelet[1781]: I0317 17:46:51.341110 1781 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:46:51.385146 kubelet[1781]: I0317 17:46:51.385080 1781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-njpv9" podStartSLOduration=14.449365994 podStartE2EDuration="19.38506493s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="2025-03-17 17:46:45.596495926 +0000 UTC m=+14.251711279" lastFinishedPulling="2025-03-17 17:46:50.532194822 +0000 UTC m=+19.187410215" observedRunningTime="2025-03-17 17:46:51.384391353 +0000 UTC m=+20.039606746" watchObservedRunningTime="2025-03-17 17:46:51.38506493 +0000 UTC m=+20.040280323" Mar 17 17:46:52.184994 kubelet[1781]: E0317 17:46:52.184908 1781 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:52.199235 kubelet[1781]: E0317 17:46:52.199198 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:53.199631 kubelet[1781]: E0317 17:46:53.199573 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:54.200333 kubelet[1781]: E0317 17:46:54.200287 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:55.200724 kubelet[1781]: E0317 17:46:55.200670 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:56.073088 kubelet[1781]: I0317 17:46:56.073044 1781 topology_manager.go:215] "Topology Admit Handler" podUID="4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836" podNamespace="default" podName="nfs-server-provisioner-0" Mar 17 17:46:56.077741 systemd[1]: Created slice kubepods-besteffort-pod4c76d5dd_a1bf_45a8_84d3_ff2eb3ce8836.slice - libcontainer container kubepods-besteffort-pod4c76d5dd_a1bf_45a8_84d3_ff2eb3ce8836.slice. 
Mar 17 17:46:56.201776 kubelet[1781]: E0317 17:46:56.201713 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:56.218981 kubelet[1781]: I0317 17:46:56.218946 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836-data\") pod \"nfs-server-provisioner-0\" (UID: \"4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836\") " pod="default/nfs-server-provisioner-0" Mar 17 17:46:56.218981 kubelet[1781]: I0317 17:46:56.218984 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwp6\" (UniqueName: \"kubernetes.io/projected/4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836-kube-api-access-7nwp6\") pod \"nfs-server-provisioner-0\" (UID: \"4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836\") " pod="default/nfs-server-provisioner-0" Mar 17 17:46:56.381174 containerd[1461]: time="2025-03-17T17:46:56.381059860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836,Namespace:default,Attempt:0,}" Mar 17 17:46:56.494403 systemd-networkd[1393]: cali60e51b789ff: Link UP Mar 17 17:46:56.494645 systemd-networkd[1393]: cali60e51b789ff: Gained carrier Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.430 [INFO][3127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.103-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836 1025 0 2025-03-17 17:46:56 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.103 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.430 [INFO][3127] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.455 [INFO][3145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" HandleID="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Workload="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.467 [INFO][3145] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" HandleID="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Workload="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304150), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.103", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:46:56.455691716 +0000 UTC"}, Hostname:"10.0.0.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.467 [INFO][3145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.467 [INFO][3145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.467 [INFO][3145] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.103' Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.469 [INFO][3145] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.472 [INFO][3145] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.476 [INFO][3145] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.478 [INFO][3145] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.480 [INFO][3145] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.480 [INFO][3145] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.482 [INFO][3145] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.486 [INFO][3145] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.491 [INFO][3145] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.3/26] block=192.168.126.0/26 handle="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.491 [INFO][3145] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.3/26] handle="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" host="10.0.0.103" Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.491 [INFO][3145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:46:56.506514 containerd[1461]: 2025-03-17 17:46:56.491 [INFO][3145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.3/26] IPv6=[] ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" HandleID="k8s-pod-network.154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Workload="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.507291 containerd[1461]: 2025-03-17 17:46:56.492 [INFO][3127] cni-plugin/k8s.go 386: Populated endpoint ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:46:56.507291 containerd[1461]: 2025-03-17 17:46:56.493 [INFO][3127] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.3/32] ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.507291 containerd[1461]: 2025-03-17 17:46:56.493 [INFO][3127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.507291 containerd[1461]: 2025-03-17 17:46:56.494 [INFO][3127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.507567 containerd[1461]: 2025-03-17 17:46:56.494 [INFO][3127] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"7e:d9:a1:48:04:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:46:56.507567 containerd[1461]: 2025-03-17 17:46:56.505 [INFO][3127] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.103-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:46:56.525200 containerd[1461]: time="2025-03-17T17:46:56.525112329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:56.525200 containerd[1461]: time="2025-03-17T17:46:56.525165616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:56.525200 containerd[1461]: time="2025-03-17T17:46:56.525180818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:56.525781 containerd[1461]: time="2025-03-17T17:46:56.525733286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:56.549749 systemd[1]: Started cri-containerd-154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d.scope - libcontainer container 154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d. 
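(Note: the Calico CNI trace above — acquire the host-wide IPAM lock, load the host's affine block 192.168.126.0/26, claim one address, then write the WorkloadEndpoint — corresponds to libcalico-go's IPAM AutoAssign call. The Go sketch below is a rough illustration only, assuming libcalico-go's clientv3 and ipam packages; the import path and return types vary by Calico release, and the handle ID is a placeholder rather than the sandbox ID from the log.)

package main

import (
	"context"
	"fmt"
	"log"

	// Assumption: recent Calico releases vendor libcalico-go under the
	// projectcalico/calico monorepo; older releases use projectcalico/libcalico-go.
	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Build a datastore client from the usual DATASTORE_TYPE/KUBECONFIG environment.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// The CNI plugin ties the allocation to the pod sandbox via a handle of the
	// form "k8s-pod-network.<containerID>"; this value is a placeholder.
	handle := "k8s-pod-network.example-sandbox-id"

	// Field values mirror the AutoAssignArgs printed in the log above.
	args := ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "default",
			"node":      "10.0.0.103",
			"pod":       "nfs-server-provisioner-0",
		},
		Hostname: "10.0.0.103",
	}

	// The CNI plugin takes the host-wide IPAM lock around this call (per the log);
	// AutoAssign then claims addresses from the host's affine block.
	v4, v6, err := c.IPAM().AutoAssign(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("assigned v4=%+v v6=%+v\n", v4, v6)
}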
Mar 17 17:46:56.558361 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:46:56.634444 containerd[1461]: time="2025-03-17T17:46:56.634349659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4c76d5dd-a1bf-45a8-84d3-ff2eb3ce8836,Namespace:default,Attempt:0,} returns sandbox id \"154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d\"" Mar 17 17:46:56.636446 containerd[1461]: time="2025-03-17T17:46:56.636417475Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:46:57.202363 kubelet[1781]: E0317 17:46:57.202324 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:58.134603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907881038.mount: Deactivated successfully. Mar 17 17:46:58.203473 kubelet[1781]: E0317 17:46:58.203435 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:58.410026 systemd-networkd[1393]: cali60e51b789ff: Gained IPv6LL Mar 17 17:46:59.204372 kubelet[1781]: E0317 17:46:59.204309 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:46:59.434799 containerd[1461]: time="2025-03-17T17:46:59.434543198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:59.435531 containerd[1461]: time="2025-03-17T17:46:59.435471175Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Mar 17 17:46:59.436483 containerd[1461]: time="2025-03-17T17:46:59.436427795Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:59.439892 containerd[1461]: time="2025-03-17T17:46:59.439844311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:59.443534 containerd[1461]: time="2025-03-17T17:46:59.440609431Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.804156072s" Mar 17 17:46:59.443534 containerd[1461]: time="2025-03-17T17:46:59.440788570Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 17:46:59.445896 containerd[1461]: time="2025-03-17T17:46:59.445862699Z" level=info msg="CreateContainer within sandbox \"154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:46:59.454514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184001563.mount: Deactivated successfully. 
Mar 17 17:46:59.457653 containerd[1461]: time="2025-03-17T17:46:59.457612285Z" level=info msg="CreateContainer within sandbox \"154ebe7808af6a11073e9c28531001d2a856dbd66ac810e0ca7789ea25a8982d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3fd10d03548bcd274d1ce6232fb8bde21a9970f6e586354c2e4bafd6edfb4064\"" Mar 17 17:46:59.458325 containerd[1461]: time="2025-03-17T17:46:59.458164382Z" level=info msg="StartContainer for \"3fd10d03548bcd274d1ce6232fb8bde21a9970f6e586354c2e4bafd6edfb4064\"" Mar 17 17:46:59.493675 systemd[1]: Started cri-containerd-3fd10d03548bcd274d1ce6232fb8bde21a9970f6e586354c2e4bafd6edfb4064.scope - libcontainer container 3fd10d03548bcd274d1ce6232fb8bde21a9970f6e586354c2e4bafd6edfb4064. Mar 17 17:46:59.519206 containerd[1461]: time="2025-03-17T17:46:59.519160025Z" level=info msg="StartContainer for \"3fd10d03548bcd274d1ce6232fb8bde21a9970f6e586354c2e4bafd6edfb4064\" returns successfully" Mar 17 17:47:00.205036 kubelet[1781]: E0317 17:47:00.204980 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:00.403440 kubelet[1781]: I0317 17:47:00.403361 1781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.595357803 podStartE2EDuration="4.403347362s" podCreationTimestamp="2025-03-17 17:46:56 +0000 UTC" firstStartedPulling="2025-03-17 17:46:56.636068552 +0000 UTC m=+25.291283945" lastFinishedPulling="2025-03-17 17:46:59.444058151 +0000 UTC m=+28.099273504" observedRunningTime="2025-03-17 17:47:00.402266015 +0000 UTC m=+29.057481408" watchObservedRunningTime="2025-03-17 17:47:00.403347362 +0000 UTC m=+29.058562755" Mar 17 17:47:01.206048 kubelet[1781]: E0317 17:47:01.205998 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:02.206435 kubelet[1781]: E0317 17:47:02.206377 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:03.207115 kubelet[1781]: E0317 17:47:03.207070 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:04.207697 kubelet[1781]: E0317 17:47:04.207632 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:05.207841 kubelet[1781]: E0317 17:47:05.207761 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:06.208189 kubelet[1781]: E0317 17:47:06.208150 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:07.208784 kubelet[1781]: E0317 17:47:07.208750 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:47:08.142510 update_engine[1449]: I20250317 17:47:08.141937 1449 update_attempter.cc:509] Updating boot flags... 
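(For reference, the PullImage → CreateContainer → StartContainer sequence logged above is driven by the kubelet over the CRI; against containerd's Go client the same lifecycle looks roughly like the sketch below. This is an illustration only — the container and snapshot IDs are made up, not the IDs from the log.)

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the kubelet uses; CRI pods live
	// in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the provisioner image seen in the log.
	image, err := client.Pull(ctx,
		"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container plus a task, then start it (the "StartContainer" step).
	container, err := client.NewContainer(ctx, "nfs-provisioner-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("nfs-provisioner-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}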
Mar 17 17:47:08.174600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3332)
Mar 17 17:47:08.205552 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3332)
Mar 17 17:47:08.209620 kubelet[1781]: E0317 17:47:08.209586 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:08.237617 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3332)
Mar 17 17:47:08.937140 kubelet[1781]: I0317 17:47:08.937088 1781 topology_manager.go:215] "Topology Admit Handler" podUID="ebfc8d50-dc6e-4331-a63f-767693b3c08d" podNamespace="default" podName="test-pod-1"
Mar 17 17:47:08.942548 systemd[1]: Created slice kubepods-besteffort-podebfc8d50_dc6e_4331_a63f_767693b3c08d.slice - libcontainer container kubepods-besteffort-podebfc8d50_dc6e_4331_a63f_767693b3c08d.slice.
Mar 17 17:47:09.082872 kubelet[1781]: I0317 17:47:09.082790 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlqgb\" (UniqueName: \"kubernetes.io/projected/ebfc8d50-dc6e-4331-a63f-767693b3c08d-kube-api-access-tlqgb\") pod \"test-pod-1\" (UID: \"ebfc8d50-dc6e-4331-a63f-767693b3c08d\") " pod="default/test-pod-1"
Mar 17 17:47:09.082872 kubelet[1781]: I0317 17:47:09.082853 1781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-561a85b9-8a58-47ba-a1b0-fa25a1d967fa\" (UniqueName: \"kubernetes.io/nfs/ebfc8d50-dc6e-4331-a63f-767693b3c08d-pvc-561a85b9-8a58-47ba-a1b0-fa25a1d967fa\") pod \"test-pod-1\" (UID: \"ebfc8d50-dc6e-4331-a63f-767693b3c08d\") " pod="default/test-pod-1"
Mar 17 17:47:09.206550 kernel: FS-Cache: Loaded
Mar 17 17:47:09.209747 kubelet[1781]: E0317 17:47:09.209712 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:09.228799 kernel: RPC: Registered named UNIX socket transport module.
Mar 17 17:47:09.228864 kernel: RPC: Registered udp transport module.
Mar 17 17:47:09.228909 kernel: RPC: Registered tcp transport module.
Mar 17 17:47:09.229623 kernel: RPC: Registered tcp-with-tls transport module.
Mar 17 17:47:09.229658 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
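(The test-pod-1 admitted above mounts a kube-api-access projected volume plus an NFS-backed PersistentVolume provisioned by nfs-server-provisioner. The client-go sketch below shows a pod shaped like it; the claim name "test-claim" and the mount path are assumptions, while the image matches the ghcr.io/flatcar/nginx:latest pull that appears further down in the log.)

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: running outside the cluster with the default kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod-1", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "ghcr.io/flatcar/nginx:latest",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "nfs-data",
					MountPath: "/usr/share/nginx/html", // hypothetical mount path
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "nfs-data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						// Hypothetical claim bound to the nfs-server-provisioner storage class.
						ClaimName: "test-claim",
					},
				},
			}},
		},
	}

	if _, err := clientset.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created default/test-pod-1")
}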
Mar 17 17:47:09.389821 kernel: NFS: Registering the id_resolver key type Mar 17 17:47:09.389933 kernel: Key type id_resolver registered Mar 17 17:47:09.389948 kernel: Key type id_legacy registered Mar 17 17:47:09.418443 nfsidmap[3354]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 17:47:09.420211 nfsidmap[3355]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 17:47:09.545736 containerd[1461]: time="2025-03-17T17:47:09.545691918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ebfc8d50-dc6e-4331-a63f-767693b3c08d,Namespace:default,Attempt:0,}" Mar 17 17:47:09.651088 systemd-networkd[1393]: cali5ec59c6bf6e: Link UP Mar 17 17:47:09.651226 systemd-networkd[1393]: cali5ec59c6bf6e: Gained carrier Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.585 [INFO][3356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.103-k8s-test--pod--1-eth0 default ebfc8d50-dc6e-4331-a63f-767693b3c08d 1093 0 2025-03-17 17:46:56 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.103 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.585 [INFO][3356] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.609 [INFO][3371] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" HandleID="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Workload="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.621 [INFO][3371] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" HandleID="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Workload="10.0.0.103-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000432e60), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.103", "pod":"test-pod-1", "timestamp":"2025-03-17 17:47:09.609642379 +0000 UTC"}, Hostname:"10.0.0.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.621 [INFO][3371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.621 [INFO][3371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.621 [INFO][3371] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.103' Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.623 [INFO][3371] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.627 [INFO][3371] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.632 [INFO][3371] ipam/ipam.go 489: Trying affinity for 192.168.126.0/26 host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.634 [INFO][3371] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.636 [INFO][3371] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.636 [INFO][3371] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.637 [INFO][3371] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5 Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.642 [INFO][3371] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.647 [INFO][3371] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.4/26] block=192.168.126.0/26 handle="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.647 [INFO][3371] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.4/26] handle="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" host="10.0.0.103" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.647 [INFO][3371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.647 [INFO][3371] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.4/26] IPv6=[] ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" HandleID="k8s-pod-network.6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Workload="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.658529 containerd[1461]: 2025-03-17 17:47:09.648 [INFO][3356] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"ebfc8d50-dc6e-4331-a63f-767693b3c08d", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:47:09.659215 containerd[1461]: 2025-03-17 17:47:09.648 [INFO][3356] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.4/32] ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.659215 containerd[1461]: 2025-03-17 17:47:09.649 [INFO][3356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.659215 containerd[1461]: 2025-03-17 17:47:09.650 [INFO][3356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.659215 containerd[1461]: 2025-03-17 17:47:09.650 [INFO][3356] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.103-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"ebfc8d50-dc6e-4331-a63f-767693b3c08d", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 46, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.103", ContainerID:"6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8a:bc:6e:82:93:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:47:09.659215 containerd[1461]: 2025-03-17 17:47:09.657 [INFO][3356] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.103-k8s-test--pod--1-eth0" Mar 17 17:47:09.715284 containerd[1461]: time="2025-03-17T17:47:09.715017032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:47:09.715284 containerd[1461]: time="2025-03-17T17:47:09.715069115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:47:09.715284 containerd[1461]: time="2025-03-17T17:47:09.715099157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:47:09.715284 containerd[1461]: time="2025-03-17T17:47:09.715205483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:47:09.731672 systemd[1]: Started cri-containerd-6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5.scope - libcontainer container 6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5. 
Mar 17 17:47:09.740665 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:47:09.754706 containerd[1461]: time="2025-03-17T17:47:09.754661515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ebfc8d50-dc6e-4331-a63f-767693b3c08d,Namespace:default,Attempt:0,} returns sandbox id \"6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5\"" Mar 17 17:47:09.756249 containerd[1461]: time="2025-03-17T17:47:09.756211850Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:47:10.052654 containerd[1461]: time="2025-03-17T17:47:10.052594762Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:47:10.053368 containerd[1461]: time="2025-03-17T17:47:10.053308204Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:47:10.056308 containerd[1461]: time="2025-03-17T17:47:10.056257977Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 300.009884ms" Mar 17 17:47:10.056308 containerd[1461]: time="2025-03-17T17:47:10.056299139Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:47:10.058359 containerd[1461]: time="2025-03-17T17:47:10.058330818Z" level=info msg="CreateContainer within sandbox \"6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:47:10.070877 containerd[1461]: time="2025-03-17T17:47:10.070826672Z" level=info msg="CreateContainer within sandbox \"6358487e92e955ae4ac01d691259dd0e351b1697b98b2b28e1516494a3f0f6b5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3cfca068106fd56032695d5cdfe66f861ff4e59bad1cccbbba9303f1cbdad363\"" Mar 17 17:47:10.071251 containerd[1461]: time="2025-03-17T17:47:10.071222135Z" level=info msg="StartContainer for \"3cfca068106fd56032695d5cdfe66f861ff4e59bad1cccbbba9303f1cbdad363\"" Mar 17 17:47:10.100693 systemd[1]: Started cri-containerd-3cfca068106fd56032695d5cdfe66f861ff4e59bad1cccbbba9303f1cbdad363.scope - libcontainer container 3cfca068106fd56032695d5cdfe66f861ff4e59bad1cccbbba9303f1cbdad363. 
Mar 17 17:47:10.123407 containerd[1461]: time="2025-03-17T17:47:10.122412219Z" level=info msg="StartContainer for \"3cfca068106fd56032695d5cdfe66f861ff4e59bad1cccbbba9303f1cbdad363\" returns successfully"
Mar 17 17:47:10.210456 kubelet[1781]: E0317 17:47:10.210415 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:10.952763 systemd-networkd[1393]: cali5ec59c6bf6e: Gained IPv6LL
Mar 17 17:47:11.211457 kubelet[1781]: E0317 17:47:11.211357 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:12.185875 kubelet[1781]: E0317 17:47:12.185820 1781 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:12.212393 kubelet[1781]: E0317 17:47:12.212359 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:13.212669 kubelet[1781]: E0317 17:47:13.212620 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:47:14.213182 kubelet[1781]: E0317 17:47:14.213136 1781 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"