Feb 13 19:12:22.036257 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:12:22.036281 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025 Feb 13 19:12:22.036291 kernel: KASLR enabled Feb 13 19:12:22.036297 kernel: efi: EFI v2.7 by EDK II Feb 13 19:12:22.036303 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Feb 13 19:12:22.036309 kernel: random: crng init done Feb 13 19:12:22.036316 kernel: secureboot: Secure boot disabled Feb 13 19:12:22.036322 kernel: ACPI: Early table checksum verification disabled Feb 13 19:12:22.036328 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 19:12:22.036335 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:12:22.036342 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036348 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036354 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036360 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036368 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036376 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036383 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036389 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036395 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:12:22.036402 kernel: ACPI: SPCR: console: 
pl011,mmio,0x9000000,9600 Feb 13 19:12:22.036408 kernel: NUMA: Failed to initialise from firmware Feb 13 19:12:22.036415 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:12:22.036422 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Feb 13 19:12:22.036428 kernel: Zone ranges: Feb 13 19:12:22.036434 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:12:22.036442 kernel: DMA32 empty Feb 13 19:12:22.036448 kernel: Normal empty Feb 13 19:12:22.036455 kernel: Movable zone start for each node Feb 13 19:12:22.036461 kernel: Early memory node ranges Feb 13 19:12:22.036467 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Feb 13 19:12:22.036474 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Feb 13 19:12:22.036480 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Feb 13 19:12:22.036486 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:12:22.036493 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:12:22.036499 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:12:22.036506 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:12:22.036512 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:12:22.036520 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:12:22.036527 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:12:22.036533 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:12:22.036542 kernel: psci: probing for conduit method from ACPI. Feb 13 19:12:22.036549 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 19:12:22.036556 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:12:22.036564 kernel: psci: Trusted OS migration not required Feb 13 19:12:22.036571 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:12:22.036578 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:12:22.036585 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:12:22.036592 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:12:22.036599 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:12:22.036606 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:12:22.036613 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:12:22.036619 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:12:22.036630 kernel: CPU features: detected: Spectre-v4 Feb 13 19:12:22.036638 kernel: CPU features: detected: Spectre-BHB Feb 13 19:12:22.036645 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:12:22.036652 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:12:22.036659 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:12:22.036665 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:12:22.036672 kernel: alternatives: applying boot alternatives Feb 13 19:12:22.036680 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33 Feb 13 19:12:22.036687 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 19:12:22.036694 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:12:22.036701 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:12:22.036708 kernel: Fallback order for Node 0: 0 Feb 13 19:12:22.036716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:12:22.036723 kernel: Policy zone: DMA Feb 13 19:12:22.036734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:12:22.036742 kernel: software IO TLB: area num 4. Feb 13 19:12:22.036751 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:12:22.036760 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved) Feb 13 19:12:22.036767 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:12:22.036774 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:12:22.036782 kernel: rcu: RCU event tracing is enabled. Feb 13 19:12:22.036789 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:12:22.036796 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:12:22.036803 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:12:22.036812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:12:22.036832 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:12:22.036840 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:12:22.036846 kernel: GICv3: 256 SPIs implemented Feb 13 19:12:22.036853 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:12:22.036860 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:12:22.036869 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:12:22.036876 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:12:22.036883 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:12:22.036890 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:12:22.036897 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:12:22.036906 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:12:22.036913 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:12:22.036921 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:12:22.036927 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:12:22.036935 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:12:22.036941 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:12:22.036948 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:12:22.036955 kernel: arm-pv: using stolen time PV Feb 13 19:12:22.036962 kernel: Console: colour dummy device 80x25 Feb 13 19:12:22.036969 kernel: ACPI: Core revision 20230628 Feb 13 19:12:22.036977 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Feb 13 19:12:22.036985 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:12:22.036992 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:12:22.036999 kernel: landlock: Up and running. Feb 13 19:12:22.037007 kernel: SELinux: Initializing. Feb 13 19:12:22.037015 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:12:22.037022 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:12:22.037029 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:12:22.037037 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:12:22.037044 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:12:22.037055 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:12:22.037062 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:12:22.037069 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:12:22.037076 kernel: Remapping and enabling EFI services. Feb 13 19:12:22.037083 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:12:22.037090 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:12:22.037097 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:12:22.037105 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:12:22.037112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:12:22.037120 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:12:22.037128 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:12:22.037140 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:12:22.037149 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:12:22.037157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:12:22.037164 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:12:22.037171 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:12:22.037179 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:12:22.037186 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:12:22.037195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:12:22.037202 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:12:22.037210 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:12:22.037217 kernel: SMP: Total of 4 processors activated. 
Feb 13 19:12:22.037224 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:12:22.037231 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:12:22.037239 kernel: CPU features: detected: Common not Private translations Feb 13 19:12:22.037246 kernel: CPU features: detected: CRC32 instructions Feb 13 19:12:22.037255 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:12:22.037262 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:12:22.037269 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:12:22.037277 kernel: CPU features: detected: Privileged Access Never Feb 13 19:12:22.037284 kernel: CPU features: detected: RAS Extension Support Feb 13 19:12:22.037291 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:12:22.037298 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:12:22.037306 kernel: alternatives: applying system-wide alternatives Feb 13 19:12:22.037313 kernel: devtmpfs: initialized Feb 13 19:12:22.037320 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:12:22.037329 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:12:22.037336 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:12:22.037343 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:12:22.037350 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 19:12:22.037357 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:12:22.037365 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:12:22.037372 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:12:22.037380 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:12:22.037389 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:12:22.037396 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Feb 13 19:12:22.037403 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:12:22.037410 kernel: cpuidle: using governor menu Feb 13 19:12:22.037418 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:12:22.037425 kernel: ASID allocator initialised with 32768 entries Feb 13 19:12:22.037433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:12:22.037440 kernel: Serial: AMBA PL011 UART driver Feb 13 19:12:22.037447 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:12:22.037456 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:12:22.037463 kernel: Modules: 509280 pages in range for PLT usage Feb 13 19:12:22.037470 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:12:22.037477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:12:22.037485 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:12:22.037492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:12:22.037499 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:12:22.037506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:12:22.037513 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 
pages Feb 13 19:12:22.037523 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:12:22.037530 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:12:22.037537 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:12:22.037544 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:12:22.037551 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:12:22.037558 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:12:22.037566 kernel: ACPI: Interpreter enabled Feb 13 19:12:22.037573 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:12:22.037580 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:12:22.037588 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:12:22.037596 kernel: printk: console [ttyAMA0] enabled Feb 13 19:12:22.037604 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:12:22.037742 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:12:22.037850 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:12:22.037932 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:12:22.038001 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:12:22.038064 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:12:22.038078 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 19:12:22.038085 kernel: PCI host bridge to bus 0000:00 Feb 13 19:12:22.038159 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:12:22.038222 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:12:22.038280 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:12:22.038338 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:12:22.038419 
kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:12:22.038512 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:12:22.038583 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:12:22.038665 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:12:22.038733 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:12:22.038800 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:12:22.038892 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:12:22.038967 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:12:22.039030 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:12:22.039090 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:12:22.039149 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:12:22.039159 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:12:22.039166 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:12:22.039173 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:12:22.039181 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:12:22.039190 kernel: iommu: Default domain type: Translated Feb 13 19:12:22.039197 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:12:22.039205 kernel: efivars: Registered efivars operations Feb 13 19:12:22.039212 kernel: vgaarb: loaded Feb 13 19:12:22.039219 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:12:22.039227 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:12:22.039234 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:12:22.039241 kernel: pnp: PnP ACPI init Feb 13 19:12:22.039319 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:12:22.039332 
kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:12:22.039339 kernel: NET: Registered PF_INET protocol family Feb 13 19:12:22.039346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:12:22.039354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:12:22.039361 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:12:22.039368 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:12:22.039376 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:12:22.039383 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:12:22.039392 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:12:22.039399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:12:22.039407 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:12:22.039414 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:12:22.039421 kernel: kvm [1]: HYP mode not available Feb 13 19:12:22.039428 kernel: Initialise system trusted keyrings Feb 13 19:12:22.039436 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:12:22.039443 kernel: Key type asymmetric registered Feb 13 19:12:22.039451 kernel: Asymmetric key parser 'x509' registered Feb 13 19:12:22.039458 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:12:22.039467 kernel: io scheduler mq-deadline registered Feb 13 19:12:22.039475 kernel: io scheduler kyber registered Feb 13 19:12:22.039482 kernel: io scheduler bfq registered Feb 13 19:12:22.039490 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:12:22.039497 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:12:22.039505 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:12:22.039573 kernel: virtio-pci 0000:00:01.0: 
enabling device (0005 -> 0007) Feb 13 19:12:22.039583 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:12:22.039590 kernel: thunder_xcv, ver 1.0 Feb 13 19:12:22.039601 kernel: thunder_bgx, ver 1.0 Feb 13 19:12:22.039610 kernel: nicpf, ver 1.0 Feb 13 19:12:22.039618 kernel: nicvf, ver 1.0 Feb 13 19:12:22.039691 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:12:22.039755 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:12:21 UTC (1739473941) Feb 13 19:12:22.039765 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:12:22.039772 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:12:22.039780 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:12:22.039789 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:12:22.039796 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:12:22.039803 kernel: Segment Routing with IPv6 Feb 13 19:12:22.039810 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:12:22.039832 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:12:22.039840 kernel: Key type dns_resolver registered Feb 13 19:12:22.039847 kernel: registered taskstats version 1 Feb 13 19:12:22.039854 kernel: Loading compiled-in X.509 certificates Feb 13 19:12:22.039862 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027' Feb 13 19:12:22.039872 kernel: Key type .fscrypt registered Feb 13 19:12:22.039879 kernel: Key type fscrypt-provisioning registered Feb 13 19:12:22.039887 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:12:22.039894 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:12:22.039905 kernel: ima: No architecture policies found Feb 13 19:12:22.039923 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:12:22.039933 kernel: clk: Disabling unused clocks Feb 13 19:12:22.039941 kernel: Freeing unused kernel memory: 38336K Feb 13 19:12:22.039955 kernel: Run /init as init process Feb 13 19:12:22.039963 kernel: with arguments: Feb 13 19:12:22.039972 kernel: /init Feb 13 19:12:22.039979 kernel: with environment: Feb 13 19:12:22.039986 kernel: HOME=/ Feb 13 19:12:22.039994 kernel: TERM=linux Feb 13 19:12:22.040001 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:12:22.040009 systemd[1]: Successfully made /usr/ read-only. Feb 13 19:12:22.040019 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:12:22.040029 systemd[1]: Detected virtualization kvm. Feb 13 19:12:22.040037 systemd[1]: Detected architecture arm64. Feb 13 19:12:22.040044 systemd[1]: Running in initrd. Feb 13 19:12:22.040052 systemd[1]: No hostname configured, using default hostname. Feb 13 19:12:22.040060 systemd[1]: Hostname set to . Feb 13 19:12:22.040067 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:12:22.040075 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:12:22.040085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:12:22.040093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:12:22.040102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Feb 13 19:12:22.040110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:12:22.040118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:12:22.040126 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:12:22.040135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:12:22.040145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:12:22.040153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:12:22.040161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:12:22.040169 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:12:22.040177 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:12:22.040184 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:12:22.040193 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:12:22.040200 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:12:22.040208 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:12:22.040218 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:12:22.040226 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 19:12:22.040238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:12:22.040250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:12:22.040258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:12:22.040266 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 19:12:22.040274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:12:22.040282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:12:22.040291 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:12:22.040299 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:12:22.040307 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:12:22.040315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:12:22.040323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:12:22.040331 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:12:22.040339 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:12:22.040349 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:12:22.040378 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 19:12:22.040400 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:12:22.040409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:12:22.040417 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:12:22.040425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:12:22.040434 systemd-journald[237]: Journal started Feb 13 19:12:22.040452 systemd-journald[237]: Runtime Journal (/run/log/journal/ecb1d01cb2314bb3bced0005089be627) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:12:22.026276 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 19:12:22.042416 kernel: Bridge firewalling registered Feb 13 19:12:22.042436 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 19:12:22.041704 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 19:12:22.043491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:12:22.044809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:12:22.047922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:12:22.050180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:12:22.053852 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:12:22.056497 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:12:22.060316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:12:22.061556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:12:22.064411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:12:22.075042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:12:22.077029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:12:22.087491 dracut-cmdline[278]: dracut-dracut-053 Feb 13 19:12:22.090192 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33 Feb 13 19:12:22.111190 systemd-resolved[280]: Positive Trust Anchors: Feb 13 19:12:22.111209 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:12:22.111240 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:12:22.116758 systemd-resolved[280]: Defaulting to hostname 'linux'. Feb 13 19:12:22.118391 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:12:22.119304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:12:22.168832 kernel: SCSI subsystem initialized Feb 13 19:12:22.171846 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:12:22.179852 kernel: iscsi: registered transport (tcp) Feb 13 19:12:22.192068 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:12:22.192107 kernel: QLogic iSCSI HBA Driver Feb 13 19:12:22.235629 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:12:22.246990 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:12:22.263981 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:12:22.264041 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:12:22.265315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:12:22.310844 kernel: raid6: neonx8 gen() 13958 MB/s Feb 13 19:12:22.327832 kernel: raid6: neonx4 gen() 15804 MB/s Feb 13 19:12:22.344830 kernel: raid6: neonx2 gen() 13183 MB/s Feb 13 19:12:22.361829 kernel: raid6: neonx1 gen() 10533 MB/s Feb 13 19:12:22.378830 kernel: raid6: int64x8 gen() 6783 MB/s Feb 13 19:12:22.395836 kernel: raid6: int64x4 gen() 7352 MB/s Feb 13 19:12:22.412830 kernel: raid6: int64x2 gen() 6106 MB/s Feb 13 19:12:22.429835 kernel: raid6: int64x1 gen() 5058 MB/s Feb 13 19:12:22.429860 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s Feb 13 19:12:22.446836 kernel: raid6: .... xor() 12336 MB/s, rmw enabled Feb 13 19:12:22.446850 kernel: raid6: using neon recovery algorithm Feb 13 19:12:22.452122 kernel: xor: measuring software checksum speed Feb 13 19:12:22.452150 kernel: 8regs : 21636 MB/sec Feb 13 19:12:22.452177 kernel: 32regs : 21699 MB/sec Feb 13 19:12:22.453056 kernel: arm64_neon : 27747 MB/sec Feb 13 19:12:22.453068 kernel: xor: using function: arm64_neon (27747 MB/sec) Feb 13 19:12:22.502847 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:12:22.513703 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:12:22.528036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:12:22.542655 systemd-udevd[464]: Using default interface naming scheme 'v255'. Feb 13 19:12:22.546355 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:12:22.548738 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:12:22.564170 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Feb 13 19:12:22.591989 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:12:22.602030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:12:22.645533 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:12:22.652992 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:12:22.663994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:12:22.666506 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:12:22.668007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:12:22.669951 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:12:22.677051 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:12:22.687722 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:12:22.698900 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:12:22.710253 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:12:22.710363 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:12:22.710381 kernel: GPT:9289727 != 19775487
Feb 13 19:12:22.710391 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:12:22.710401 kernel: GPT:9289727 != 19775487
Feb 13 19:12:22.710410 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:12:22.710419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:12:22.703656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:12:22.703790 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:12:22.706087 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:12:22.709060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:12:22.709225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:12:22.712060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:12:22.721111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:12:22.729842 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (525)
Feb 13 19:12:22.732055 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Feb 13 19:12:22.735596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:12:22.752327 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:12:22.759856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:12:22.765900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:12:22.766911 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:12:22.774969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:12:22.783989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:12:22.785616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:12:22.789971 disk-uuid[551]: Primary Header is updated.
Feb 13 19:12:22.789971 disk-uuid[551]: Secondary Entries is updated.
Feb 13 19:12:22.789971 disk-uuid[551]: Secondary Header is updated.
Feb 13 19:12:22.792836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:12:22.812260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:12:23.804856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:12:23.805970 disk-uuid[552]: The operation has completed successfully.
Feb 13 19:12:23.828872 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:12:23.828994 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:12:23.871019 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:12:23.874006 sh[573]: Success
Feb 13 19:12:23.887861 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:12:23.919209 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:12:23.927437 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:12:23.929613 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:12:23.939894 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:12:23.939954 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:12:23.939966 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:12:23.941232 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:12:23.941248 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:12:23.945005 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:12:23.946234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:12:23.947075 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:12:23.949157 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:12:23.962910 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:12:23.962969 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:12:23.962981 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:12:23.965970 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:12:23.974857 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:12:23.980875 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:12:23.991084 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:12:24.004534 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:12:24.050703 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:12:24.062063 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:12:24.094475 systemd-networkd[765]: lo: Link UP
Feb 13 19:12:24.094490 systemd-networkd[765]: lo: Gained carrier
Feb 13 19:12:24.095404 systemd-networkd[765]: Enumeration completed
Feb 13 19:12:24.095597 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:12:24.096934 systemd[1]: Reached target network.target - Network.
Feb 13 19:12:24.097799 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:12:24.097803 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:12:24.098530 systemd-networkd[765]: eth0: Link UP
Feb 13 19:12:24.098533 systemd-networkd[765]: eth0: Gained carrier
Feb 13 19:12:24.098542 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:12:24.105524 ignition[674]: Ignition 2.20.0
Feb 13 19:12:24.105538 ignition[674]: Stage: fetch-offline
Feb 13 19:12:24.105589 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:12:24.105599 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:12:24.105762 ignition[674]: parsed url from cmdline: ""
Feb 13 19:12:24.105765 ignition[674]: no config URL provided
Feb 13 19:12:24.105770 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:12:24.105777 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:12:24.105811 ignition[674]: op(1): [started] loading QEMU firmware config module
Feb 13 19:12:24.105816 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:12:24.116570 ignition[674]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:12:24.116863 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:12:24.123362 ignition[674]: parsing config with SHA512: 58c8bfdb4fc71846bd30f12a0e4f1b6a97b7e998167262cf13ddff384e5b602c60691db297b270d55f907a921725d55c4780a5904dbaacfe5887bbc726e18929
Feb 13 19:12:24.126887 unknown[674]: fetched base config from "system"
Feb 13 19:12:24.126898 unknown[674]: fetched user config from "qemu"
Feb 13 19:12:24.127159 ignition[674]: fetch-offline: fetch-offline passed
Feb 13 19:12:24.128765 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:12:24.127237 ignition[674]: Ignition finished successfully
Feb 13 19:12:24.129979 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:12:24.139014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:12:24.152278 ignition[774]: Ignition 2.20.0
Feb 13 19:12:24.152290 ignition[774]: Stage: kargs
Feb 13 19:12:24.152475 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:12:24.152486 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:12:24.153230 ignition[774]: kargs: kargs passed
Feb 13 19:12:24.153279 ignition[774]: Ignition finished successfully
Feb 13 19:12:24.156529 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:12:24.167047 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:12:24.177679 ignition[783]: Ignition 2.20.0
Feb 13 19:12:24.177691 ignition[783]: Stage: disks
Feb 13 19:12:24.177970 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:12:24.180152 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:12:24.177981 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:12:24.181292 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:12:24.178668 ignition[783]: disks: disks passed
Feb 13 19:12:24.182524 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:12:24.178712 ignition[783]: Ignition finished successfully
Feb 13 19:12:24.184180 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:12:24.185558 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:12:24.186706 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:12:24.197000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:12:24.208363 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:12:24.212541 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:12:24.216003 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:12:24.265125 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:12:24.265662 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:12:24.266853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:12:24.276926 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:12:24.278648 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:12:24.279672 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:12:24.279747 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:12:24.279796 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:12:24.286924 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Feb 13 19:12:24.286672 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:12:24.290182 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:12:24.290200 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:12:24.290210 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:12:24.289298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:12:24.293851 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:12:24.294238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:12:24.332421 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:12:24.336467 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:12:24.341182 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:12:24.344907 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:12:24.426869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:12:24.445044 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:12:24.446542 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:12:24.451856 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:12:24.472195 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:12:24.474212 ignition[915]: INFO : Ignition 2.20.0
Feb 13 19:12:24.474212 ignition[915]: INFO : Stage: mount
Feb 13 19:12:24.474212 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:12:24.474212 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:12:24.474212 ignition[915]: INFO : mount: mount passed
Feb 13 19:12:24.474212 ignition[915]: INFO : Ignition finished successfully
Feb 13 19:12:24.477140 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:12:24.484996 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:12:25.005537 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:12:25.014066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:12:25.019849 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 19:12:25.022266 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:12:25.022282 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:12:25.022303 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:12:25.024847 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:12:25.025500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:12:25.042699 ignition[946]: INFO : Ignition 2.20.0
Feb 13 19:12:25.042699 ignition[946]: INFO : Stage: files
Feb 13 19:12:25.044091 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:12:25.044091 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:12:25.044091 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:12:25.046913 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:12:25.046913 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:12:25.048966 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:12:25.048966 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:12:25.048966 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:12:25.047840 unknown[946]: wrote ssh authorized keys file for user: core
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:12:25.052863 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:12:25.377297 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:12:25.621792 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 19:12:25.629442 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:12:25.629442 ignition[946]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:12:25.632373 ignition[946]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:12:25.632373 ignition[946]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:12:25.632373 ignition[946]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:12:25.632373 ignition[946]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:12:25.646141 ignition[946]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:12:25.650806 ignition[946]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:12:25.651918 ignition[946]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:12:25.651918 ignition[946]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:12:25.651918 ignition[946]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:12:25.651918 ignition[946]: INFO : files: files passed
Feb 13 19:12:25.651918 ignition[946]: INFO : Ignition finished successfully
Feb 13 19:12:25.654470 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:12:25.671047 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:12:25.674122 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:12:25.675264 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:12:25.675358 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:12:25.683303 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:12:25.686234 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:12:25.686234 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:12:25.688662 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:12:25.688843 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:12:25.690999 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:12:25.702041 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:12:25.722912 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:12:25.723111 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:12:25.725155 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:12:25.726306 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:12:25.727604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:12:25.728585 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:12:25.744746 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:12:25.757047 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:12:25.765259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:12:25.766294 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:12:25.767907 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:12:25.769296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:12:25.769435 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:12:25.771356 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:12:25.772802 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:12:25.774035 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:12:25.775332 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:12:25.776700 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:12:25.778191 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:12:25.779525 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:12:25.781117 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:12:25.782888 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:12:25.784281 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:12:25.785465 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:12:25.785604 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:12:25.787318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:12:25.788854 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:12:25.790318 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:12:25.791928 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:12:25.792923 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:12:25.793063 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:12:25.795394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:12:25.795529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:12:25.797005 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:12:25.798163 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:12:25.801921 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:12:25.803973 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:12:25.804720 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:12:25.806710 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:12:25.806922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:12:25.808062 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:12:25.808151 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:12:25.809401 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:12:25.809522 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:12:25.810938 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:12:25.811047 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:12:25.824075 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:12:25.826779 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:12:25.827568 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:12:25.827697 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:12:25.829423 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:12:25.829536 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:12:25.835667 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:12:25.836515 ignition[1002]: INFO : Ignition 2.20.0
Feb 13 19:12:25.836515 ignition[1002]: INFO : Stage: umount
Feb 13 19:12:25.836515 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:12:25.836515 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:12:25.842619 ignition[1002]: INFO : umount: umount passed
Feb 13 19:12:25.842619 ignition[1002]: INFO : Ignition finished successfully
Feb 13 19:12:25.837285 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:12:25.839560 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:12:25.839639 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:12:25.842658 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:12:25.843119 systemd[1]: Stopped target network.target - Network.
Feb 13 19:12:25.844088 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:12:25.844159 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:12:25.845657 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:12:25.845706 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:12:25.850743 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:12:25.850813 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:12:25.852100 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:12:25.852147 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:12:25.853780 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:12:25.855146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:12:25.857012 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:12:25.857107 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:12:25.858676 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:12:25.858890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:12:25.862604 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:12:25.862737 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:12:25.867339 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:12:25.867584 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:12:25.867694 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:12:25.873715 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:12:25.875369 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:12:25.875514 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:12:25.889982 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:12:25.890813 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:12:25.890914 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:12:25.892723 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:12:25.892784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:12:25.895359 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:12:25.895411 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:12:25.896970 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:12:25.897017 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:12:25.899331 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:12:25.909539 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:12:25.909662 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:12:25.918619 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:12:25.918788 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:12:25.920758 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:12:25.920831 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:12:25.922132 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:12:25.922162 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:12:25.923599 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:12:25.923647 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:12:25.926413 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:12:25.926506 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:12:25.928245 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:12:25.928298 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:12:25.941058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:12:25.941960 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:12:25.942028 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:12:25.944519 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:12:25.944570 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:12:25.946378 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:12:25.946422 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:12:25.947989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:12:25.948031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:12:25.950700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:12:25.950800 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:12:25.952615 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:12:25.954415 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:12:25.964873 systemd[1]: Switching root.
Feb 13 19:12:26.001030 systemd-journald[237]: Journal stopped
Feb 13 19:12:26.708371 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:12:26.708433 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:12:26.708447 kernel: SELinux: policy capability open_perms=1
Feb 13 19:12:26.708460 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:12:26.708477 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:12:26.708486 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:12:26.708496 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:12:26.708505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:12:26.708514 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:12:26.708524 kernel: audit: type=1403 audit(1739473946.126:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:12:26.708535 systemd[1]: Successfully loaded SELinux policy in 34.726ms.
Feb 13 19:12:26.708551 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.237ms.
Feb 13 19:12:26.708563 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:12:26.708575 systemd[1]: Detected virtualization kvm. Feb 13 19:12:26.708585 systemd[1]: Detected architecture arm64. Feb 13 19:12:26.708597 systemd[1]: Detected first boot. Feb 13 19:12:26.708607 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:12:26.708617 zram_generator::config[1048]: No configuration found. Feb 13 19:12:26.708630 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:12:26.708640 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:12:26.708651 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:12:26.708663 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:12:26.708673 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:12:26.708684 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:12:26.708696 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:12:26.708707 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:12:26.708718 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:12:26.708729 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:12:26.708739 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:12:26.708751 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:12:26.708776 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Feb 13 19:12:26.708789 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:12:26.708800 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:12:26.708810 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:12:26.708853 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:12:26.708869 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:12:26.708880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:12:26.708891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:12:26.708901 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:12:26.708911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:12:26.708922 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:12:26.708932 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:12:26.708945 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:12:26.708962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:12:26.708972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:12:26.708984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:12:26.708996 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:12:26.709006 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:12:26.709017 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:12:26.709028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Feb 13 19:12:26.709039 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:12:26.709049 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:12:26.709061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:12:26.709072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:12:26.709083 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:12:26.709093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:12:26.709106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:12:26.709116 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:12:26.709127 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:12:26.709138 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:12:26.709153 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:12:26.709166 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:12:26.709177 systemd[1]: Reached target machines.target - Containers. Feb 13 19:12:26.709188 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:12:26.709199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:12:26.709209 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:12:26.709220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:12:26.709231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Feb 13 19:12:26.709241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:12:26.709253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:12:26.709264 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:12:26.709274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:12:26.709285 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:12:26.709295 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:12:26.709306 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:12:26.709316 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:12:26.709327 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:12:26.709339 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:12:26.709351 kernel: loop: module loaded Feb 13 19:12:26.709361 kernel: fuse: init (API version 7.39) Feb 13 19:12:26.709370 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:12:26.709380 kernel: ACPI: bus type drm_connector registered Feb 13 19:12:26.709390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:12:26.709401 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:12:26.709412 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:12:26.709422 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Feb 13 19:12:26.709434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:12:26.709445 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:12:26.709484 systemd-journald[1118]: Collecting audit messages is disabled. Feb 13 19:12:26.709507 systemd[1]: Stopped verity-setup.service. Feb 13 19:12:26.709521 systemd-journald[1118]: Journal started Feb 13 19:12:26.709542 systemd-journald[1118]: Runtime Journal (/run/log/journal/ecb1d01cb2314bb3bced0005089be627) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:12:26.519111 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:12:26.531973 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:12:26.532454 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:12:26.712511 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:12:26.713231 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:12:26.714255 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:12:26.715368 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:12:26.716254 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:12:26.717180 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:12:26.718161 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:12:26.720853 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:12:26.722288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:12:26.723729 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:12:26.723926 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:12:26.725101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 19:12:26.725260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:12:26.726483 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:12:26.726667 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:12:26.728111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:12:26.728283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:12:26.729599 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:12:26.729769 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:12:26.731153 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:12:26.731312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:12:26.732545 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:12:26.733796 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:12:26.735338 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:12:26.736571 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:12:26.749091 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:12:26.759936 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:12:26.761999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:12:26.762872 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:12:26.762908 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:12:26.764624 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Feb 13 19:12:26.766687 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:12:26.771085 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:12:26.772148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:12:26.773549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:12:26.775769 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:12:26.776824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:12:26.781096 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:12:26.782288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:12:26.786069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:12:26.786298 systemd-journald[1118]: Time spent on flushing to /var/log/journal/ecb1d01cb2314bb3bced0005089be627 is 12.037ms for 849 entries. Feb 13 19:12:26.786298 systemd-journald[1118]: System Journal (/var/log/journal/ecb1d01cb2314bb3bced0005089be627) is 8M, max 195.6M, 187.6M free. Feb 13 19:12:26.806293 systemd-journald[1118]: Received client request to flush runtime journal. Feb 13 19:12:26.790094 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:12:26.795594 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:12:26.798713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:12:26.800020 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Feb 13 19:12:26.801271 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:12:26.802555 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:12:26.804026 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:12:26.808677 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:12:26.813917 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:12:26.822874 kernel: loop0: detected capacity change from 0 to 113512 Feb 13 19:12:26.828320 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:12:26.831465 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:12:26.834431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:12:26.843909 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:12:26.846409 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 19:12:26.846425 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 19:12:26.854742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:12:26.869144 kernel: loop1: detected capacity change from 0 to 123192 Feb 13 19:12:26.868324 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:12:26.869510 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:12:26.871814 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:12:26.892339 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:12:26.901451 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 19:12:26.905860 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:12:26.916769 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Feb 13 19:12:26.916790 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Feb 13 19:12:26.921405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:12:26.935843 kernel: loop3: detected capacity change from 0 to 113512 Feb 13 19:12:26.940847 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 19:12:26.945842 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:12:26.950406 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:12:26.950874 (sd-merge)[1193]: Merged extensions into '/usr'. Feb 13 19:12:26.954364 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:12:26.954382 systemd[1]: Reloading... Feb 13 19:12:27.016980 zram_generator::config[1220]: No configuration found. Feb 13 19:12:27.085897 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:12:27.121447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:12:27.171313 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:12:27.172004 systemd[1]: Reloading finished in 217 ms. Feb 13 19:12:27.197247 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:12:27.200122 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:12:27.214279 systemd[1]: Starting ensure-sysext.service... Feb 13 19:12:27.216326 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Feb 13 19:12:27.228579 systemd[1]: Reload requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:12:27.228597 systemd[1]: Reloading... Feb 13 19:12:27.241361 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:12:27.241672 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:12:27.242381 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:12:27.242589 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 19:12:27.242636 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 19:12:27.246288 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:12:27.246300 systemd-tmpfiles[1256]: Skipping /boot Feb 13 19:12:27.255556 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:12:27.255573 systemd-tmpfiles[1256]: Skipping /boot Feb 13 19:12:27.282115 zram_generator::config[1285]: No configuration found. Feb 13 19:12:27.373031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:12:27.424054 systemd[1]: Reloading finished in 195 ms. Feb 13 19:12:27.437581 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:12:27.455862 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:12:27.463963 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:12:27.466521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:12:27.471909 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Feb 13 19:12:27.475845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:12:27.485146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:12:27.487630 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:12:27.492803 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:12:27.495509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:12:27.497495 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:12:27.503173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:12:27.506232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:12:27.507295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:12:27.507432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:12:27.510173 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:12:27.521192 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:12:27.523336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:12:27.523691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:12:27.524740 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Feb 13 19:12:27.525404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:12:27.527697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 19:12:27.529175 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:12:27.529330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:12:27.539988 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:12:27.545524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:12:27.547730 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:12:27.549995 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:12:27.565977 systemd[1]: Finished ensure-sysext.service. Feb 13 19:12:27.567376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:12:27.573213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:12:27.575198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:12:27.579694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:12:27.582103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:12:27.583038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:12:27.583086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:12:27.586479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:12:27.591031 augenrules[1383]: No rules Feb 13 19:12:27.593071 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Feb 13 19:12:27.594706 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:12:27.595142 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:12:27.597209 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:12:27.598870 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:12:27.601170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:12:27.601336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:12:27.602544 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:12:27.604884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:12:27.606083 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:12:27.606248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:12:27.618082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:12:27.623123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:12:27.624526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:12:27.636198 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:12:27.636606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:12:27.663938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1358) Feb 13 19:12:27.679046 systemd-resolved[1325]: Positive Trust Anchors: Feb 13 19:12:27.679066 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:12:27.679102 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:12:27.688705 systemd-resolved[1325]: Defaulting to hostname 'linux'. Feb 13 19:12:27.692881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:12:27.699994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:12:27.701006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:12:27.714106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:12:27.723426 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:12:27.724533 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:12:27.731283 systemd-networkd[1382]: lo: Link UP Feb 13 19:12:27.731290 systemd-networkd[1382]: lo: Gained carrier Feb 13 19:12:27.732190 systemd-networkd[1382]: Enumeration completed Feb 13 19:12:27.733332 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:12:27.733343 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:12:27.733952 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 19:12:27.734169 systemd-networkd[1382]: eth0: Link UP Feb 13 19:12:27.734178 systemd-networkd[1382]: eth0: Gained carrier Feb 13 19:12:27.734195 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:12:27.735242 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:12:27.738144 systemd[1]: Reached target network.target - Network. Feb 13 19:12:27.749895 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:12:27.750045 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:12:27.750719 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection. Feb 13 19:12:27.751596 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:12:27.751645 systemd-timesyncd[1392]: Initial clock synchronization to Thu 2025-02-13 19:12:27.718281 UTC. Feb 13 19:12:27.752232 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:12:27.757103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:12:27.769692 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:12:27.771111 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:12:27.785010 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:12:27.802859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:12:27.808060 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 19:12:27.835488 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:12:27.836713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:12:27.837619 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:12:27.838544 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:12:27.839471 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:12:27.840558 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:12:27.841463 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:12:27.842400 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:12:27.843348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:12:27.843383 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:12:27.844091 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:12:27.846893 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:12:27.849426 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:12:27.853418 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:12:27.854873 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:12:27.855937 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:12:27.859194 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:12:27.860659 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Feb 13 19:12:27.862944 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:12:27.864383 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:12:27.865344 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:12:27.866172 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:12:27.866935 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:12:27.866976 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:12:27.868119 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:12:27.871056 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:12:27.871134 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:12:27.874003 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:12:27.879239 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:12:27.880113 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:12:27.882715 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:12:27.887752 jq[1430]: false Feb 13 19:12:27.891008 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:12:27.895127 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:12:27.899673 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:12:27.901600 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 19:12:27.902177 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:12:27.903942 extend-filesystems[1431]: Found loop3 Feb 13 19:12:27.903942 extend-filesystems[1431]: Found loop4 Feb 13 19:12:27.903942 extend-filesystems[1431]: Found loop5 Feb 13 19:12:27.903942 extend-filesystems[1431]: Found vda Feb 13 19:12:27.903942 extend-filesystems[1431]: Found vda1 Feb 13 19:12:27.903942 extend-filesystems[1431]: Found vda2 Feb 13 19:12:27.902432 dbus-daemon[1429]: [system] SELinux support is enabled Feb 13 19:12:27.904077 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:12:27.912192 extend-filesystems[1431]: Found vda3 Feb 13 19:12:27.912192 extend-filesystems[1431]: Found usr Feb 13 19:12:27.912192 extend-filesystems[1431]: Found vda4 Feb 13 19:12:27.912192 extend-filesystems[1431]: Found vda6 Feb 13 19:12:27.912192 extend-filesystems[1431]: Found vda7 Feb 13 19:12:27.912192 extend-filesystems[1431]: Found vda9 Feb 13 19:12:27.912192 extend-filesystems[1431]: Checking size of /dev/vda9 Feb 13 19:12:27.908046 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:12:27.918606 jq[1445]: true Feb 13 19:12:27.909762 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:12:27.913018 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:12:27.923152 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:12:27.923923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:12:27.924222 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:12:27.924391 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 19:12:27.926225 extend-filesystems[1431]: Resized partition /dev/vda9 Feb 13 19:12:27.927295 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:12:27.927495 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:12:27.931939 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:12:27.938792 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:12:27.940530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:12:27.940563 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:12:27.944915 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:12:27.944979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366) Feb 13 19:12:27.944932 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:12:27.944954 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:12:27.959967 jq[1453]: true Feb 13 19:12:27.976940 update_engine[1443]: I20250213 19:12:27.976399 1443 main.cc:92] Flatcar Update Engine starting Feb 13 19:12:27.981476 update_engine[1443]: I20250213 19:12:27.981314 1443 update_check_scheduler.cc:74] Next update check in 11m49s Feb 13 19:12:27.981999 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:12:27.989224 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 19:12:27.991121 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:12:28.003620 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:12:28.003620 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:12:28.003620 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:12:28.010785 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Feb 13 19:12:28.003683 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:12:28.004803 systemd-logind[1438]: New seat seat0. Feb 13 19:12:28.009575 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:12:28.010609 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:12:28.014523 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:12:28.055880 bash[1480]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:12:28.057849 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:12:28.060861 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:12:28.094274 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:12:28.176593 containerd[1454]: time="2025-02-13T19:12:28.176408864Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:12:28.200931 containerd[1454]: time="2025-02-13T19:12:28.200871548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202360 containerd[1454]: time="2025-02-13T19:12:28.202315526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202360 containerd[1454]: time="2025-02-13T19:12:28.202353647Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:12:28.202458 containerd[1454]: time="2025-02-13T19:12:28.202373106Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:12:28.202579 containerd[1454]: time="2025-02-13T19:12:28.202550961Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:12:28.202579 containerd[1454]: time="2025-02-13T19:12:28.202576175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202651 containerd[1454]: time="2025-02-13T19:12:28.202636952Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202679 containerd[1454]: time="2025-02-13T19:12:28.202653455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202899 containerd[1454]: time="2025-02-13T19:12:28.202879859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202923 containerd[1454]: time="2025-02-13T19:12:28.202900598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202923 containerd[1454]: time="2025-02-13T19:12:28.202915702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:12:28.202964 containerd[1454]: time="2025-02-13T19:12:28.202924972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:12:28.203022 containerd[1454]: time="2025-02-13T19:12:28.203009005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:12:28.203233 containerd[1454]: time="2025-02-13T19:12:28.203209517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:12:28.203362 containerd[1454]: time="2025-02-13T19:12:28.203346934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:12:28.203383 containerd[1454]: time="2025-02-13T19:12:28.203363996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:12:28.203452 containerd[1454]: time="2025-02-13T19:12:28.203440317Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:12:28.203498 containerd[1454]: time="2025-02-13T19:12:28.203487547Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:12:28.206937 containerd[1454]: time="2025-02-13T19:12:28.206900718Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:12:28.207019 containerd[1454]: time="2025-02-13T19:12:28.206967649Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 19:12:28.207019 containerd[1454]: time="2025-02-13T19:12:28.206982633Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:12:28.207019 containerd[1454]: time="2025-02-13T19:12:28.207000495Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:12:28.207019 containerd[1454]: time="2025-02-13T19:12:28.207015399Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:12:28.207194 containerd[1454]: time="2025-02-13T19:12:28.207175633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:12:28.207444 containerd[1454]: time="2025-02-13T19:12:28.207429169Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:12:28.207575 containerd[1454]: time="2025-02-13T19:12:28.207559154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:12:28.207602 containerd[1454]: time="2025-02-13T19:12:28.207579852Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:12:28.207602 containerd[1454]: time="2025-02-13T19:12:28.207594197Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:12:28.207650 containerd[1454]: time="2025-02-13T19:12:28.207607104Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207650 containerd[1454]: time="2025-02-13T19:12:28.207620210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 19:12:28.207650 containerd[1454]: time="2025-02-13T19:12:28.207632398Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207650 containerd[1454]: time="2025-02-13T19:12:28.207648781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207714 containerd[1454]: time="2025-02-13T19:12:28.207664684Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207714 containerd[1454]: time="2025-02-13T19:12:28.207677870Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207714 containerd[1454]: time="2025-02-13T19:12:28.207689818Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207714 containerd[1454]: time="2025-02-13T19:12:28.207700487Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:12:28.207780 containerd[1454]: time="2025-02-13T19:12:28.207728098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207780 containerd[1454]: time="2025-02-13T19:12:28.207743762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207780 containerd[1454]: time="2025-02-13T19:12:28.207755749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207780 containerd[1454]: time="2025-02-13T19:12:28.207767577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 19:12:28.207780 containerd[1454]: time="2025-02-13T19:12:28.207779405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207883 containerd[1454]: time="2025-02-13T19:12:28.207792711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207883 containerd[1454]: time="2025-02-13T19:12:28.207804019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207883 containerd[1454]: time="2025-02-13T19:12:28.207836785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207883 containerd[1454]: time="2025-02-13T19:12:28.207851770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207883 containerd[1454]: time="2025-02-13T19:12:28.207865675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207964 containerd[1454]: time="2025-02-13T19:12:28.207888012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207964 containerd[1454]: time="2025-02-13T19:12:28.207900479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207964 containerd[1454]: time="2025-02-13T19:12:28.207913545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.207964 containerd[1454]: time="2025-02-13T19:12:28.207930727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:12:28.207964 containerd[1454]: time="2025-02-13T19:12:28.207952025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 19:12:28.208043 containerd[1454]: time="2025-02-13T19:12:28.207966370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.208043 containerd[1454]: time="2025-02-13T19:12:28.207977359Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:12:28.208166 containerd[1454]: time="2025-02-13T19:12:28.208148341Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:12:28.208191 containerd[1454]: time="2025-02-13T19:12:28.208172156Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:12:28.208191 containerd[1454]: time="2025-02-13T19:12:28.208183704Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:12:28.208264 containerd[1454]: time="2025-02-13T19:12:28.208195732Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:12:28.208264 containerd[1454]: time="2025-02-13T19:12:28.208204922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:12:28.208264 containerd[1454]: time="2025-02-13T19:12:28.208218348Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:12:28.208264 containerd[1454]: time="2025-02-13T19:12:28.208228378Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:12:28.208264 containerd[1454]: time="2025-02-13T19:12:28.208239327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:12:28.208627 containerd[1454]: time="2025-02-13T19:12:28.208573379Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:12:28.208746 containerd[1454]: time="2025-02-13T19:12:28.208635794Z" level=info msg="Connect containerd service" Feb 13 19:12:28.208746 containerd[1454]: time="2025-02-13T19:12:28.208667521Z" level=info msg="using legacy CRI server" Feb 13 19:12:28.208746 containerd[1454]: time="2025-02-13T19:12:28.208674714Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:12:28.208960 containerd[1454]: time="2025-02-13T19:12:28.208943434Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:12:28.210795 containerd[1454]: time="2025-02-13T19:12:28.210748717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:12:28.211292 containerd[1454]: time="2025-02-13T19:12:28.210922057Z" level=info msg="Start subscribing containerd event" Feb 13 19:12:28.211292 containerd[1454]: time="2025-02-13T19:12:28.210984512Z" level=info msg="Start recovering state" Feb 13 19:12:28.211292 containerd[1454]: time="2025-02-13T19:12:28.211059714Z" level=info msg="Start event monitor" Feb 13 19:12:28.211292 containerd[1454]: time="2025-02-13T19:12:28.211071422Z" level=info msg="Start 
snapshots syncer" Feb 13 19:12:28.211292 containerd[1454]: time="2025-02-13T19:12:28.211081291Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:12:28.211292 containerd[1454]: time="2025-02-13T19:12:28.211088764Z" level=info msg="Start streaming server" Feb 13 19:12:28.211806 containerd[1454]: time="2025-02-13T19:12:28.211778886Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:12:28.211874 containerd[1454]: time="2025-02-13T19:12:28.211861760Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:12:28.213249 containerd[1454]: time="2025-02-13T19:12:28.211920179Z" level=info msg="containerd successfully booted in 0.036538s" Feb 13 19:12:28.212066 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:12:28.982212 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:12:29.002224 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:12:29.025362 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:12:29.031316 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:12:29.031612 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:12:29.035780 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:12:29.049869 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:12:29.060186 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:12:29.062375 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:12:29.063524 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:12:29.460940 systemd-networkd[1382]: eth0: Gained IPv6LL Feb 13 19:12:29.465896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:12:29.467869 systemd[1]: Reached target network-online.target - Network is Online. 
Feb 13 19:12:29.479126 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:12:29.481849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:12:29.484051 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:12:29.503153 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:12:29.503435 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:12:29.505414 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:12:29.509437 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:12:29.976225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:12:29.977598 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:12:29.980641 (kubelet)[1534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:12:29.981934 systemd[1]: Startup finished in 704ms (kernel) + 4.342s (initrd) + 3.889s (userspace) = 8.936s. Feb 13 19:12:30.472789 kubelet[1534]: E0213 19:12:30.472634 1534 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:12:30.474496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:12:30.474640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:12:30.477915 systemd[1]: kubelet.service: Consumed 854ms CPU time, 242.9M memory peak. Feb 13 19:12:34.248580 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 19:12:34.250148 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:56000.service - OpenSSH per-connection server daemon (10.0.0.1:56000). Feb 13 19:12:34.311449 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 56000 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:34.315106 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:34.325521 systemd-logind[1438]: New session 1 of user core. Feb 13 19:12:34.326569 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:12:34.347131 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:12:34.357848 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:12:34.360003 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:12:34.366748 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:12:34.368810 systemd-logind[1438]: New session c1 of user core. Feb 13 19:12:34.479180 systemd[1552]: Queued start job for default target default.target. Feb 13 19:12:34.490742 systemd[1552]: Created slice app.slice - User Application Slice. Feb 13 19:12:34.490771 systemd[1552]: Reached target paths.target - Paths. Feb 13 19:12:34.490814 systemd[1552]: Reached target timers.target - Timers. Feb 13 19:12:34.492075 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:12:34.501489 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:12:34.501555 systemd[1552]: Reached target sockets.target - Sockets. Feb 13 19:12:34.501593 systemd[1552]: Reached target basic.target - Basic System. Feb 13 19:12:34.501621 systemd[1552]: Reached target default.target - Main User Target. Feb 13 19:12:34.501647 systemd[1552]: Startup finished in 126ms. 
Feb 13 19:12:34.501794 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:12:34.515052 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:12:34.589664 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:56002.service - OpenSSH per-connection server daemon (10.0.0.1:56002). Feb 13 19:12:34.634856 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 56002 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:34.636539 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:34.640445 systemd-logind[1438]: New session 2 of user core. Feb 13 19:12:34.655039 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:12:34.707123 sshd[1565]: Connection closed by 10.0.0.1 port 56002 Feb 13 19:12:34.707449 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Feb 13 19:12:34.717044 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:56002.service: Deactivated successfully. Feb 13 19:12:34.718678 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:12:34.719349 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:12:34.729195 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:56014.service - OpenSSH per-connection server daemon (10.0.0.1:56014). Feb 13 19:12:34.730463 systemd-logind[1438]: Removed session 2. Feb 13 19:12:34.769384 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 56014 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:34.770465 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:34.775146 systemd-logind[1438]: New session 3 of user core. Feb 13 19:12:34.785002 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 19:12:34.832826 sshd[1573]: Connection closed by 10.0.0.1 port 56014 Feb 13 19:12:34.833344 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Feb 13 19:12:34.847027 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:56014.service: Deactivated successfully. Feb 13 19:12:34.848644 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:12:34.850118 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:12:34.861209 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:56030.service - OpenSSH per-connection server daemon (10.0.0.1:56030). Feb 13 19:12:34.862507 systemd-logind[1438]: Removed session 3. Feb 13 19:12:34.901309 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 56030 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:34.902617 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:34.906620 systemd-logind[1438]: New session 4 of user core. Feb 13 19:12:34.914997 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:12:34.966435 sshd[1581]: Connection closed by 10.0.0.1 port 56030 Feb 13 19:12:34.966954 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Feb 13 19:12:34.977982 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:56030.service: Deactivated successfully. Feb 13 19:12:34.979599 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:12:34.980898 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:12:34.991139 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:56032.service - OpenSSH per-connection server daemon (10.0.0.1:56032). Feb 13 19:12:34.992154 systemd-logind[1438]: Removed session 4. 
Feb 13 19:12:35.029594 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 56032 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:35.031078 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:35.035613 systemd-logind[1438]: New session 5 of user core. Feb 13 19:12:35.046056 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:12:35.110048 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:12:35.110346 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:12:35.123941 sudo[1590]: pam_unix(sudo:session): session closed for user root Feb 13 19:12:35.125398 sshd[1589]: Connection closed by 10.0.0.1 port 56032 Feb 13 19:12:35.125784 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Feb 13 19:12:35.145289 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:56032.service: Deactivated successfully. Feb 13 19:12:35.148305 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:12:35.149651 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:12:35.151760 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:56048.service - OpenSSH per-connection server daemon (10.0.0.1:56048). Feb 13 19:12:35.152595 systemd-logind[1438]: Removed session 5. Feb 13 19:12:35.194604 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 56048 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:35.195924 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:35.199637 systemd-logind[1438]: New session 6 of user core. Feb 13 19:12:35.211062 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:12:35.261895 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:12:35.262196 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:12:35.265587 sudo[1600]: pam_unix(sudo:session): session closed for user root Feb 13 19:12:35.270586 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:12:35.270906 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:12:35.293187 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:12:35.321769 augenrules[1622]: No rules Feb 13 19:12:35.323071 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:12:35.323291 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:12:35.324581 sudo[1599]: pam_unix(sudo:session): session closed for user root Feb 13 19:12:35.326514 sshd[1598]: Connection closed by 10.0.0.1 port 56048 Feb 13 19:12:35.326429 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Feb 13 19:12:35.336086 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:56048.service: Deactivated successfully. Feb 13 19:12:35.337595 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:12:35.338367 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:12:35.351301 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:56050.service - OpenSSH per-connection server daemon (10.0.0.1:56050). Feb 13 19:12:35.352368 systemd-logind[1438]: Removed session 6. Feb 13 19:12:35.390143 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 56050 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:12:35.391382 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:12:35.395866 systemd-logind[1438]: New session 7 of user core. 
Feb 13 19:12:35.403041 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:12:35.454425 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:12:35.454731 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:12:35.476154 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:12:35.492336 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:12:35.492551 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:12:36.002345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:12:36.002502 systemd[1]: kubelet.service: Consumed 854ms CPU time, 242.9M memory peak. Feb 13 19:12:36.017154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:12:36.035156 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit session-7.scope)... Feb 13 19:12:36.035172 systemd[1]: Reloading... Feb 13 19:12:36.101847 zram_generator::config[1726]: No configuration found. Feb 13 19:12:36.285596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:12:36.356767 systemd[1]: Reloading finished in 321 ms. Feb 13 19:12:36.399664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:12:36.402926 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:12:36.403890 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:12:36.404904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:12:36.404959 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.3M memory peak. Feb 13 19:12:36.406757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:12:36.496043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:12:36.500479 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:12:36.537864 kubelet[1773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:12:36.537864 kubelet[1773]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:12:36.537864 kubelet[1773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:12:36.537864 kubelet[1773]: I0213 19:12:36.537811 1773 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:12:37.224323 kubelet[1773]: I0213 19:12:37.224268 1773 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:12:37.224323 kubelet[1773]: I0213 19:12:37.224305 1773 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:12:37.224531 kubelet[1773]: I0213 19:12:37.224514 1773 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:12:37.262446 kubelet[1773]: I0213 19:12:37.262405 1773 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:12:37.275946 kubelet[1773]: I0213 19:12:37.275914 1773 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:12:37.276201 kubelet[1773]: I0213 19:12:37.276174 1773 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:12:37.276372 kubelet[1773]: I0213 19:12:37.276201 1773 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:12:37.276457 kubelet[1773]: I0213 19:12:37.276437 1773 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:12:37.276457 kubelet[1773]: I0213 19:12:37.276446 1773 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:12:37.276719 kubelet[1773]: I0213 19:12:37.276701 1773 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:12:37.277664 kubelet[1773]: I0213 19:12:37.277629 1773 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:12:37.277664 kubelet[1773]: I0213 19:12:37.277654 1773 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:12:37.278286 kubelet[1773]: I0213 19:12:37.277750 1773 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:12:37.278286 kubelet[1773]: E0213 19:12:37.277837 1773 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:37.278286 kubelet[1773]: I0213 19:12:37.277915 1773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:12:37.278286 kubelet[1773]: E0213 19:12:37.277948 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:37.279401 kubelet[1773]: I0213 19:12:37.279367 1773 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:12:37.279754 kubelet[1773]: I0213 19:12:37.279738 1773 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:12:37.279884 kubelet[1773]: W0213 19:12:37.279860 1773 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:12:37.280714 kubelet[1773]: I0213 19:12:37.280688 1773 server.go:1264] "Started kubelet" Feb 13 19:12:37.282900 kubelet[1773]: I0213 19:12:37.280877 1773 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:12:37.282900 kubelet[1773]: I0213 19:12:37.281313 1773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:12:37.282900 kubelet[1773]: I0213 19:12:37.281584 1773 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:12:37.282900 kubelet[1773]: I0213 19:12:37.282049 1773 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:12:37.283561 kubelet[1773]: I0213 19:12:37.283539 1773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:12:37.284461 kubelet[1773]: I0213 19:12:37.284361 1773 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:12:37.284521 kubelet[1773]: I0213 19:12:37.284488 1773 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:12:37.285856 kubelet[1773]: I0213 19:12:37.285322 1773 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:12:37.286942 kubelet[1773]: E0213 19:12:37.286522 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5917c1d1af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.280666031 +0000 UTC m=+0.777071878,LastTimestamp:2025-02-13 19:12:37.280666031 +0000 UTC m=+0.777071878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.286942 kubelet[1773]: W0213 19:12:37.286792 1773 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:12:37.286942 kubelet[1773]: E0213 19:12:37.286855 1773 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:12:37.286942 kubelet[1773]: W0213 19:12:37.286899 1773 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.88" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:12:37.286942 kubelet[1773]: E0213 19:12:37.286921 1773 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.88" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:12:37.287509 kubelet[1773]: I0213 19:12:37.287475 1773 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:12:37.287592 kubelet[1773]: I0213 19:12:37.287568 1773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:12:37.288090 kubelet[1773]: E0213 19:12:37.288055 1773 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:12:37.288720 kubelet[1773]: E0213 19:12:37.288600 1773 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.88\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:12:37.288720 kubelet[1773]: W0213 19:12:37.288698 1773 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:12:37.288720 kubelet[1773]: E0213 19:12:37.288721 1773 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:12:37.288974 kubelet[1773]: I0213 19:12:37.288950 1773 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:12:37.290831 kubelet[1773]: E0213 19:12:37.290698 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918326174 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.288042868 +0000 UTC m=+0.784448715,LastTimestamp:2025-02-13 19:12:37.288042868 +0000 UTC 
m=+0.784448715,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.301462 kubelet[1773]: I0213 19:12:37.301429 1773 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:12:37.301462 kubelet[1773]: I0213 19:12:37.301452 1773 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:12:37.301462 kubelet[1773]: I0213 19:12:37.301474 1773 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:12:37.303755 kubelet[1773]: E0213 19:12:37.303526 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918f12766 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.88 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.300545382 +0000 UTC m=+0.796951229,LastTimestamp:2025-02-13 19:12:37.300545382 +0000 UTC m=+0.796951229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.312373 kubelet[1773]: E0213 19:12:37.312126 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918f15485 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 
10.0.0.88 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.300556933 +0000 UTC m=+0.796962740,LastTimestamp:2025-02-13 19:12:37.300556933 +0000 UTC m=+0.796962740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.323626 kubelet[1773]: E0213 19:12:37.323423 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918f161a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.88 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.30056029 +0000 UTC m=+0.796966097,LastTimestamp:2025-02-13 19:12:37.30056029 +0000 UTC m=+0.796966097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.386502 kubelet[1773]: I0213 19:12:37.386239 1773 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.88" Feb 13 19:12:37.393328 kubelet[1773]: E0213 19:12:37.393264 1773 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.88" Feb 13 19:12:37.393328 kubelet[1773]: E0213 19:12:37.393202 1773 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.88.1823da5918f12766\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918f12766 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.88 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.300545382 +0000 UTC m=+0.796951229,LastTimestamp:2025-02-13 19:12:37.386184843 +0000 UTC m=+0.882590690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.394332 kubelet[1773]: E0213 19:12:37.394236 1773 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.88.1823da5918f15485\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918f15485 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.88 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.300556933 +0000 UTC m=+0.796962740,LastTimestamp:2025-02-13 19:12:37.386200271 +0000 UTC m=+0.882606118,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.401994 kubelet[1773]: E0213 19:12:37.401887 1773 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.88.1823da5918f161a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.1823da5918f161a2 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.88 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-02-13 19:12:37.30056029 +0000 UTC m=+0.796966097,LastTimestamp:2025-02-13 19:12:37.386204588 +0000 UTC m=+0.882610435,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Feb 13 19:12:37.501702 kubelet[1773]: E0213 19:12:37.501578 1773 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.88\" not found" node="10.0.0.88" Feb 13 19:12:37.538088 kubelet[1773]: I0213 19:12:37.538031 1773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:12:37.539047 kubelet[1773]: I0213 19:12:37.539019 1773 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:12:37.539091 kubelet[1773]: I0213 19:12:37.539059 1773 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:12:37.539091 kubelet[1773]: I0213 19:12:37.539083 1773 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:12:37.539153 kubelet[1773]: E0213 19:12:37.539129 1773 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:12:37.576078 kubelet[1773]: I0213 19:12:37.576017 1773 policy_none.go:49] "None policy: Start" Feb 13 19:12:37.577022 kubelet[1773]: I0213 19:12:37.577000 1773 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:12:37.577078 kubelet[1773]: I0213 19:12:37.577032 1773 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:12:37.594213 kubelet[1773]: I0213 19:12:37.594180 1773 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.88" Feb 13 19:12:37.639513 kubelet[1773]: E0213 19:12:37.639467 1773 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:12:37.642706 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:12:37.643038 kubelet[1773]: I0213 19:12:37.642697 1773 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.88" Feb 13 19:12:37.654617 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:12:37.657488 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:12:37.668761 kubelet[1773]: I0213 19:12:37.668564 1773 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:12:37.668880 kubelet[1773]: I0213 19:12:37.668778 1773 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:12:37.669192 kubelet[1773]: I0213 19:12:37.668927 1773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:12:37.670490 kubelet[1773]: E0213 19:12:37.670469 1773 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.88\" not found" Feb 13 19:12:37.790554 kubelet[1773]: E0213 19:12:37.790383 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Feb 13 19:12:37.891444 kubelet[1773]: E0213 19:12:37.891396 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Feb 13 19:12:37.991932 kubelet[1773]: E0213 19:12:37.991885 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Feb 13 19:12:38.010478 sudo[1634]: pam_unix(sudo:session): session closed for user root Feb 13 19:12:38.011684 sshd[1633]: Connection closed by 10.0.0.1 port 56050 Feb 13 19:12:38.012038 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Feb 13 19:12:38.015437 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:56050.service: Deactivated successfully. Feb 13 19:12:38.017456 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:12:38.017640 systemd[1]: session-7.scope: Consumed 475ms CPU time, 109.9M memory peak. Feb 13 19:12:38.018532 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:12:38.019322 systemd-logind[1438]: Removed session 7. 
Feb 13 19:12:38.092520 kubelet[1773]: E0213 19:12:38.092387 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Feb 13 19:12:38.193347 kubelet[1773]: I0213 19:12:38.193313 1773 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:12:38.193647 containerd[1454]: time="2025-02-13T19:12:38.193610775Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:12:38.194249 kubelet[1773]: I0213 19:12:38.194058 1773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:12:38.228464 kubelet[1773]: I0213 19:12:38.228141 1773 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:12:38.228464 kubelet[1773]: W0213 19:12:38.228332 1773 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:12:38.228464 kubelet[1773]: W0213 19:12:38.228368 1773 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:12:38.278624 kubelet[1773]: I0213 19:12:38.278577 1773 apiserver.go:52] "Watching apiserver" Feb 13 19:12:38.278687 kubelet[1773]: E0213 19:12:38.278655 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:38.290489 kubelet[1773]: I0213 19:12:38.290442 1773 topology_manager.go:215] "Topology Admit Handler" podUID="7a1e0f46-d78f-49d9-b6c3-82e3abe5761d" podNamespace="calico-system" podName="calico-node-cdt7w" Feb 13 19:12:38.290577 
kubelet[1773]: I0213 19:12:38.290552 1773 topology_manager.go:215] "Topology Admit Handler" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" podNamespace="calico-system" podName="csi-node-driver-f5f7f" Feb 13 19:12:38.290624 kubelet[1773]: I0213 19:12:38.290611 1773 topology_manager.go:215] "Topology Admit Handler" podUID="94a65910-2b72-4b1c-8795-c0efe0acabfc" podNamespace="kube-system" podName="kube-proxy-4v9zp" Feb 13 19:12:38.291013 kubelet[1773]: E0213 19:12:38.290772 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:38.299302 systemd[1]: Created slice kubepods-besteffort-pod94a65910_2b72_4b1c_8795_c0efe0acabfc.slice - libcontainer container kubepods-besteffort-pod94a65910_2b72_4b1c_8795_c0efe0acabfc.slice. Feb 13 19:12:38.319696 systemd[1]: Created slice kubepods-besteffort-pod7a1e0f46_d78f_49d9_b6c3_82e3abe5761d.slice - libcontainer container kubepods-besteffort-pod7a1e0f46_d78f_49d9_b6c3_82e3abe5761d.slice. 
Feb 13 19:12:38.386039 kubelet[1773]: I0213 19:12:38.385933 1773 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:12:38.391053 kubelet[1773]: I0213 19:12:38.391001 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-node-certs\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391053 kubelet[1773]: I0213 19:12:38.391035 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-var-run-calico\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391053 kubelet[1773]: I0213 19:12:38.391058 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cthg4\" (UniqueName: \"kubernetes.io/projected/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-kube-api-access-cthg4\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391326 kubelet[1773]: I0213 19:12:38.391077 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fca2c23-28a9-4065-bfd1-1c47f655c46e-socket-dir\") pod \"csi-node-driver-f5f7f\" (UID: \"1fca2c23-28a9-4065-bfd1-1c47f655c46e\") " pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:38.391326 kubelet[1773]: I0213 19:12:38.391095 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94a65910-2b72-4b1c-8795-c0efe0acabfc-lib-modules\") pod 
\"kube-proxy-4v9zp\" (UID: \"94a65910-2b72-4b1c-8795-c0efe0acabfc\") " pod="kube-system/kube-proxy-4v9zp" Feb 13 19:12:38.391326 kubelet[1773]: I0213 19:12:38.391123 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-xtables-lock\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391326 kubelet[1773]: I0213 19:12:38.391140 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-cni-net-dir\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391326 kubelet[1773]: I0213 19:12:38.391158 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1fca2c23-28a9-4065-bfd1-1c47f655c46e-varrun\") pod \"csi-node-driver-f5f7f\" (UID: \"1fca2c23-28a9-4065-bfd1-1c47f655c46e\") " pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:38.391503 kubelet[1773]: I0213 19:12:38.391172 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fca2c23-28a9-4065-bfd1-1c47f655c46e-kubelet-dir\") pod \"csi-node-driver-f5f7f\" (UID: \"1fca2c23-28a9-4065-bfd1-1c47f655c46e\") " pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:38.391503 kubelet[1773]: I0213 19:12:38.391186 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtmc\" (UniqueName: \"kubernetes.io/projected/1fca2c23-28a9-4065-bfd1-1c47f655c46e-kube-api-access-tmtmc\") pod \"csi-node-driver-f5f7f\" (UID: 
\"1fca2c23-28a9-4065-bfd1-1c47f655c46e\") " pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:38.391503 kubelet[1773]: I0213 19:12:38.391206 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-lib-modules\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391503 kubelet[1773]: I0213 19:12:38.391221 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-policysync\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391503 kubelet[1773]: I0213 19:12:38.391248 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-var-lib-calico\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391595 kubelet[1773]: I0213 19:12:38.391288 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-cni-bin-dir\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391595 kubelet[1773]: I0213 19:12:38.391318 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-cni-log-dir\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 
13 19:12:38.391595 kubelet[1773]: I0213 19:12:38.391341 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-flexvol-driver-host\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391595 kubelet[1773]: I0213 19:12:38.391357 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1e0f46-d78f-49d9-b6c3-82e3abe5761d-tigera-ca-bundle\") pod \"calico-node-cdt7w\" (UID: \"7a1e0f46-d78f-49d9-b6c3-82e3abe5761d\") " pod="calico-system/calico-node-cdt7w" Feb 13 19:12:38.391595 kubelet[1773]: I0213 19:12:38.391378 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fca2c23-28a9-4065-bfd1-1c47f655c46e-registration-dir\") pod \"csi-node-driver-f5f7f\" (UID: \"1fca2c23-28a9-4065-bfd1-1c47f655c46e\") " pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:38.391690 kubelet[1773]: I0213 19:12:38.391399 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94a65910-2b72-4b1c-8795-c0efe0acabfc-kube-proxy\") pod \"kube-proxy-4v9zp\" (UID: \"94a65910-2b72-4b1c-8795-c0efe0acabfc\") " pod="kube-system/kube-proxy-4v9zp" Feb 13 19:12:38.391690 kubelet[1773]: I0213 19:12:38.391433 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94a65910-2b72-4b1c-8795-c0efe0acabfc-xtables-lock\") pod \"kube-proxy-4v9zp\" (UID: \"94a65910-2b72-4b1c-8795-c0efe0acabfc\") " pod="kube-system/kube-proxy-4v9zp" Feb 13 19:12:38.391690 kubelet[1773]: I0213 19:12:38.391465 
1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsdrn\" (UniqueName: \"kubernetes.io/projected/94a65910-2b72-4b1c-8795-c0efe0acabfc-kube-api-access-tsdrn\") pod \"kube-proxy-4v9zp\" (UID: \"94a65910-2b72-4b1c-8795-c0efe0acabfc\") " pod="kube-system/kube-proxy-4v9zp" Feb 13 19:12:38.496228 kubelet[1773]: E0213 19:12:38.496173 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:12:38.496228 kubelet[1773]: W0213 19:12:38.496220 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:12:38.496335 kubelet[1773]: E0213 19:12:38.496245 1773 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:12:38.503489 kubelet[1773]: E0213 19:12:38.503309 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:12:38.503489 kubelet[1773]: W0213 19:12:38.503325 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:12:38.503489 kubelet[1773]: E0213 19:12:38.503343 1773 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:12:38.503621 kubelet[1773]: E0213 19:12:38.503519 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:12:38.503621 kubelet[1773]: W0213 19:12:38.503527 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:12:38.503621 kubelet[1773]: E0213 19:12:38.503537 1773 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:12:38.503722 kubelet[1773]: E0213 19:12:38.503657 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:12:38.503722 kubelet[1773]: W0213 19:12:38.503664 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:12:38.503722 kubelet[1773]: E0213 19:12:38.503671 1773 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:12:38.617926 containerd[1454]: time="2025-02-13T19:12:38.617877597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4v9zp,Uid:94a65910-2b72-4b1c-8795-c0efe0acabfc,Namespace:kube-system,Attempt:0,}" Feb 13 19:12:38.622513 containerd[1454]: time="2025-02-13T19:12:38.622475197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdt7w,Uid:7a1e0f46-d78f-49d9-b6c3-82e3abe5761d,Namespace:calico-system,Attempt:0,}" Feb 13 19:12:39.147082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324785718.mount: Deactivated successfully. 
Feb 13 19:12:39.153424 containerd[1454]: time="2025-02-13T19:12:39.153374816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:12:39.154682 containerd[1454]: time="2025-02-13T19:12:39.154649122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:12:39.155994 containerd[1454]: time="2025-02-13T19:12:39.155962319Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:12:39.156711 containerd[1454]: time="2025-02-13T19:12:39.156672998Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:12:39.157238 containerd[1454]: time="2025-02-13T19:12:39.157210324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:12:39.159566 containerd[1454]: time="2025-02-13T19:12:39.159518232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:12:39.162507 containerd[1454]: time="2025-02-13T19:12:39.162459156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.504059ms" Feb 13 19:12:39.165218 containerd[1454]: 
time="2025-02-13T19:12:39.165180322Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.64309ms" Feb 13 19:12:39.279735 kubelet[1773]: E0213 19:12:39.279654 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:39.289423 containerd[1454]: time="2025-02-13T19:12:39.286428878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:12:39.289423 containerd[1454]: time="2025-02-13T19:12:39.286915321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:12:39.289423 containerd[1454]: time="2025-02-13T19:12:39.286932948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:39.289423 containerd[1454]: time="2025-02-13T19:12:39.288890873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:39.292553 containerd[1454]: time="2025-02-13T19:12:39.292453941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:12:39.292553 containerd[1454]: time="2025-02-13T19:12:39.292527567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:12:39.292742 containerd[1454]: time="2025-02-13T19:12:39.292700160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:39.293012 containerd[1454]: time="2025-02-13T19:12:39.292928193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:39.386022 systemd[1]: Started cri-containerd-1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f.scope - libcontainer container 1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f. Feb 13 19:12:39.387435 systemd[1]: Started cri-containerd-f35a1e451c59379b360941e0f531d12abbffab2af50690472a67b809e9a3b005.scope - libcontainer container f35a1e451c59379b360941e0f531d12abbffab2af50690472a67b809e9a3b005. Feb 13 19:12:39.410219 containerd[1454]: time="2025-02-13T19:12:39.409959721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdt7w,Uid:7a1e0f46-d78f-49d9-b6c3-82e3abe5761d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\"" Feb 13 19:12:39.413616 containerd[1454]: time="2025-02-13T19:12:39.413455038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:12:39.414942 containerd[1454]: time="2025-02-13T19:12:39.414897021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4v9zp,Uid:94a65910-2b72-4b1c-8795-c0efe0acabfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f35a1e451c59379b360941e0f531d12abbffab2af50690472a67b809e9a3b005\"" Feb 13 19:12:40.280116 kubelet[1773]: E0213 19:12:40.280075 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:40.540093 kubelet[1773]: E0213 19:12:40.539816 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:40.661705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143854679.mount: Deactivated successfully. Feb 13 19:12:40.743665 containerd[1454]: time="2025-02-13T19:12:40.743294182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:40.744110 containerd[1454]: time="2025-02-13T19:12:40.744062876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:12:40.744503 containerd[1454]: time="2025-02-13T19:12:40.744450680Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:40.746700 containerd[1454]: time="2025-02-13T19:12:40.746644642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:40.747679 containerd[1454]: time="2025-02-13T19:12:40.747554316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.33400023s" Feb 13 19:12:40.747679 containerd[1454]: time="2025-02-13T19:12:40.747593368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:12:40.749097 containerd[1454]: 
time="2025-02-13T19:12:40.749008524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:12:40.749790 containerd[1454]: time="2025-02-13T19:12:40.749753235Z" level=info msg="CreateContainer within sandbox \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:12:40.769457 containerd[1454]: time="2025-02-13T19:12:40.769397524Z" level=info msg="CreateContainer within sandbox \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60\"" Feb 13 19:12:40.770572 containerd[1454]: time="2025-02-13T19:12:40.770535396Z" level=info msg="StartContainer for \"1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60\"" Feb 13 19:12:40.798975 systemd[1]: Started cri-containerd-1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60.scope - libcontainer container 1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60. Feb 13 19:12:40.828446 containerd[1454]: time="2025-02-13T19:12:40.828385874Z" level=info msg="StartContainer for \"1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60\" returns successfully" Feb 13 19:12:40.850210 systemd[1]: cri-containerd-1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60.scope: Deactivated successfully. 
Feb 13 19:12:40.892468 containerd[1454]: time="2025-02-13T19:12:40.892385225Z" level=info msg="shim disconnected" id=1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60 namespace=k8s.io Feb 13 19:12:40.892468 containerd[1454]: time="2025-02-13T19:12:40.892444463Z" level=warning msg="cleaning up after shim disconnected" id=1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60 namespace=k8s.io Feb 13 19:12:40.892468 containerd[1454]: time="2025-02-13T19:12:40.892452017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:12:41.280461 kubelet[1773]: E0213 19:12:41.280331 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:41.637087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ab501be61ff18270300ce0fa82bcd3d0c3cb71548727dc3ddc306de95dc0a60-rootfs.mount: Deactivated successfully. Feb 13 19:12:41.712518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91241143.mount: Deactivated successfully. 
Feb 13 19:12:41.923003 containerd[1454]: time="2025-02-13T19:12:41.922875660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:41.923835 containerd[1454]: time="2025-02-13T19:12:41.923786714Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:12:41.924608 containerd[1454]: time="2025-02-13T19:12:41.924553906Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:41.926612 containerd[1454]: time="2025-02-13T19:12:41.926578553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:41.927274 containerd[1454]: time="2025-02-13T19:12:41.927143684Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.178103343s" Feb 13 19:12:41.927274 containerd[1454]: time="2025-02-13T19:12:41.927168707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:12:41.928474 containerd[1454]: time="2025-02-13T19:12:41.928450105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:12:41.929275 containerd[1454]: time="2025-02-13T19:12:41.929164654Z" level=info msg="CreateContainer within sandbox \"f35a1e451c59379b360941e0f531d12abbffab2af50690472a67b809e9a3b005\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:12:41.943469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243746035.mount: Deactivated successfully. Feb 13 19:12:41.947927 containerd[1454]: time="2025-02-13T19:12:41.947882657Z" level=info msg="CreateContainer within sandbox \"f35a1e451c59379b360941e0f531d12abbffab2af50690472a67b809e9a3b005\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0825d1fadf7e54beb37283e1518b298839d522d8aac964e3fa5e5f319499685\"" Feb 13 19:12:41.948597 containerd[1454]: time="2025-02-13T19:12:41.948560191Z" level=info msg="StartContainer for \"f0825d1fadf7e54beb37283e1518b298839d522d8aac964e3fa5e5f319499685\"" Feb 13 19:12:41.974994 systemd[1]: Started cri-containerd-f0825d1fadf7e54beb37283e1518b298839d522d8aac964e3fa5e5f319499685.scope - libcontainer container f0825d1fadf7e54beb37283e1518b298839d522d8aac964e3fa5e5f319499685. Feb 13 19:12:41.998393 containerd[1454]: time="2025-02-13T19:12:41.998354575Z" level=info msg="StartContainer for \"f0825d1fadf7e54beb37283e1518b298839d522d8aac964e3fa5e5f319499685\" returns successfully" Feb 13 19:12:42.280575 kubelet[1773]: E0213 19:12:42.280405 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:42.539593 kubelet[1773]: E0213 19:12:42.539419 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:42.561732 kubelet[1773]: I0213 19:12:42.561666 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4v9zp" podStartSLOduration=3.050400825 podStartE2EDuration="5.561648696s" podCreationTimestamp="2025-02-13 19:12:37 +0000 UTC" 
firstStartedPulling="2025-02-13 19:12:39.416678475 +0000 UTC m=+2.913084282" lastFinishedPulling="2025-02-13 19:12:41.927926306 +0000 UTC m=+5.424332153" observedRunningTime="2025-02-13 19:12:42.560912027 +0000 UTC m=+6.057317874" watchObservedRunningTime="2025-02-13 19:12:42.561648696 +0000 UTC m=+6.058054543" Feb 13 19:12:43.281181 kubelet[1773]: E0213 19:12:43.281146 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:43.959272 containerd[1454]: time="2025-02-13T19:12:43.959223966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:43.960080 containerd[1454]: time="2025-02-13T19:12:43.959904486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:12:43.962183 containerd[1454]: time="2025-02-13T19:12:43.961132533Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:43.963162 containerd[1454]: time="2025-02-13T19:12:43.963117092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:43.964299 containerd[1454]: time="2025-02-13T19:12:43.964219420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.035740295s" Feb 13 19:12:43.964299 containerd[1454]: time="2025-02-13T19:12:43.964255837Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:12:43.966512 containerd[1454]: time="2025-02-13T19:12:43.966470127Z" level=info msg="CreateContainer within sandbox \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:12:43.977157 containerd[1454]: time="2025-02-13T19:12:43.977076080Z" level=info msg="CreateContainer within sandbox \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475\"" Feb 13 19:12:43.977766 containerd[1454]: time="2025-02-13T19:12:43.977714548Z" level=info msg="StartContainer for \"c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475\"" Feb 13 19:12:44.003971 systemd[1]: Started cri-containerd-c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475.scope - libcontainer container c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475. 
Feb 13 19:12:44.038364 containerd[1454]: time="2025-02-13T19:12:44.038279764Z" level=info msg="StartContainer for \"c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475\" returns successfully" Feb 13 19:12:44.282762 kubelet[1773]: E0213 19:12:44.282645 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:44.469926 containerd[1454]: time="2025-02-13T19:12:44.469877442Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:12:44.471742 systemd[1]: cri-containerd-c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475.scope: Deactivated successfully. Feb 13 19:12:44.472050 systemd[1]: cri-containerd-c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475.scope: Consumed 429ms CPU time, 165.5M memory peak, 147.4M written to disk. Feb 13 19:12:44.487875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475-rootfs.mount: Deactivated successfully. Feb 13 19:12:44.525469 kubelet[1773]: I0213 19:12:44.525423 1773 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:12:44.544164 systemd[1]: Created slice kubepods-besteffort-pod1fca2c23_28a9_4065_bfd1_1c47f655c46e.slice - libcontainer container kubepods-besteffort-pod1fca2c23_28a9_4065_bfd1_1c47f655c46e.slice. 
Feb 13 19:12:44.546486 containerd[1454]: time="2025-02-13T19:12:44.546452952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:0,}" Feb 13 19:12:44.666808 containerd[1454]: time="2025-02-13T19:12:44.666739924Z" level=info msg="shim disconnected" id=c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475 namespace=k8s.io Feb 13 19:12:44.666948 containerd[1454]: time="2025-02-13T19:12:44.666797129Z" level=warning msg="cleaning up after shim disconnected" id=c6a56170161ff02abf541bf96ab13312022b1fbee3dfe797ff944e6814bb6475 namespace=k8s.io Feb 13 19:12:44.666948 containerd[1454]: time="2025-02-13T19:12:44.666893189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:12:44.808510 containerd[1454]: time="2025-02-13T19:12:44.808382741Z" level=error msg="Failed to destroy network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:44.808740 containerd[1454]: time="2025-02-13T19:12:44.808701302Z" level=error msg="encountered an error cleaning up failed sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:44.808895 containerd[1454]: time="2025-02-13T19:12:44.808790646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:44.809272 kubelet[1773]: E0213 19:12:44.809202 1773 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:44.809336 kubelet[1773]: E0213 19:12:44.809297 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:44.809336 kubelet[1773]: E0213 19:12:44.809318 1773 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:44.809401 kubelet[1773]: E0213 19:12:44.809369 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:45.283322 kubelet[1773]: E0213 19:12:45.283194 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:45.559600 kubelet[1773]: I0213 19:12:45.559496 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a" Feb 13 19:12:45.560611 containerd[1454]: time="2025-02-13T19:12:45.560515038Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\"" Feb 13 19:12:45.560917 containerd[1454]: time="2025-02-13T19:12:45.560847876Z" level=info msg="Ensure that sandbox 37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a in task-service has been cleanup successfully" Feb 13 19:12:45.561861 containerd[1454]: time="2025-02-13T19:12:45.561787867Z" level=info msg="TearDown network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" successfully" Feb 13 19:12:45.561861 containerd[1454]: time="2025-02-13T19:12:45.561832480Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" returns successfully" Feb 13 19:12:45.562365 systemd[1]: run-netns-cni\x2d8fad29a5\x2d9788\x2d39fb\x2d4a0c\x2d55b25fdaafcb.mount: Deactivated successfully. 
Feb 13 19:12:45.562598 containerd[1454]: time="2025-02-13T19:12:45.562364597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:1,}" Feb 13 19:12:45.565371 containerd[1454]: time="2025-02-13T19:12:45.565346111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:12:45.614788 containerd[1454]: time="2025-02-13T19:12:45.614715441Z" level=error msg="Failed to destroy network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:45.615080 containerd[1454]: time="2025-02-13T19:12:45.615041403Z" level=error msg="encountered an error cleaning up failed sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:45.615115 containerd[1454]: time="2025-02-13T19:12:45.615100048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:45.615712 kubelet[1773]: E0213 19:12:45.615337 1773 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:45.615712 kubelet[1773]: E0213 19:12:45.615401 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:45.615712 kubelet[1773]: E0213 19:12:45.615422 1773 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:45.615912 kubelet[1773]: E0213 19:12:45.615460 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5f7f" 
podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:45.616168 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1-shm.mount: Deactivated successfully. Feb 13 19:12:46.284165 kubelet[1773]: E0213 19:12:46.284118 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:46.569348 kubelet[1773]: I0213 19:12:46.566964 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1" Feb 13 19:12:46.569455 containerd[1454]: time="2025-02-13T19:12:46.567670373Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\"" Feb 13 19:12:46.569455 containerd[1454]: time="2025-02-13T19:12:46.567842152Z" level=info msg="Ensure that sandbox 1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1 in task-service has been cleanup successfully" Feb 13 19:12:46.569324 systemd[1]: run-netns-cni\x2ddc85eab7\x2d7f80\x2d01a6\x2d8258\x2d8ca3848eccfa.mount: Deactivated successfully. 
Feb 13 19:12:46.569985 containerd[1454]: time="2025-02-13T19:12:46.569954232Z" level=info msg="TearDown network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" successfully" Feb 13 19:12:46.569985 containerd[1454]: time="2025-02-13T19:12:46.569982496Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" returns successfully" Feb 13 19:12:46.570340 containerd[1454]: time="2025-02-13T19:12:46.570317539Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\"" Feb 13 19:12:46.570401 containerd[1454]: time="2025-02-13T19:12:46.570391616Z" level=info msg="TearDown network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" successfully" Feb 13 19:12:46.570427 containerd[1454]: time="2025-02-13T19:12:46.570401330Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" returns successfully" Feb 13 19:12:46.571147 containerd[1454]: time="2025-02-13T19:12:46.571123266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:2,}" Feb 13 19:12:46.655662 containerd[1454]: time="2025-02-13T19:12:46.655603684Z" level=error msg="Failed to destroy network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:46.657628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f-shm.mount: Deactivated successfully. 
Feb 13 19:12:46.658510 containerd[1454]: time="2025-02-13T19:12:46.658475519Z" level=error msg="encountered an error cleaning up failed sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:46.658560 containerd[1454]: time="2025-02-13T19:12:46.658545238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:46.658886 kubelet[1773]: E0213 19:12:46.658758 1773 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:46.658886 kubelet[1773]: E0213 19:12:46.658831 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:46.658886 kubelet[1773]: E0213 19:12:46.658852 1773 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:46.659382 kubelet[1773]: E0213 19:12:46.659145 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:47.284970 kubelet[1773]: E0213 19:12:47.284835 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:47.570077 kubelet[1773]: I0213 19:12:47.569975 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f" Feb 13 19:12:47.570770 containerd[1454]: time="2025-02-13T19:12:47.570624175Z" level=info msg="StopPodSandbox for \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\"" Feb 13 19:12:47.571664 containerd[1454]: time="2025-02-13T19:12:47.571045056Z" level=info msg="Ensure that sandbox 051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f in task-service has 
been cleanup successfully" Feb 13 19:12:47.571664 containerd[1454]: time="2025-02-13T19:12:47.571220916Z" level=info msg="TearDown network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\" successfully" Feb 13 19:12:47.571664 containerd[1454]: time="2025-02-13T19:12:47.571235468Z" level=info msg="StopPodSandbox for \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\" returns successfully" Feb 13 19:12:47.571664 containerd[1454]: time="2025-02-13T19:12:47.571501836Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\"" Feb 13 19:12:47.571664 containerd[1454]: time="2025-02-13T19:12:47.571568838Z" level=info msg="TearDown network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" successfully" Feb 13 19:12:47.571664 containerd[1454]: time="2025-02-13T19:12:47.571578753Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" returns successfully" Feb 13 19:12:47.572275 containerd[1454]: time="2025-02-13T19:12:47.572252410Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\"" Feb 13 19:12:47.572341 containerd[1454]: time="2025-02-13T19:12:47.572326128Z" level=info msg="TearDown network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" successfully" Feb 13 19:12:47.572341 containerd[1454]: time="2025-02-13T19:12:47.572336002Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" returns successfully" Feb 13 19:12:47.573321 containerd[1454]: time="2025-02-13T19:12:47.573126792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:3,}" Feb 13 19:12:47.573657 systemd[1]: run-netns-cni\x2d78f6fd7c\x2ddb62\x2d6261\x2d84c7\x2de96f83cd39bc.mount: Deactivated 
successfully. Feb 13 19:12:47.640331 containerd[1454]: time="2025-02-13T19:12:47.640279293Z" level=error msg="Failed to destroy network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:47.641927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08-shm.mount: Deactivated successfully. Feb 13 19:12:47.642753 containerd[1454]: time="2025-02-13T19:12:47.642701516Z" level=error msg="encountered an error cleaning up failed sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:47.642807 containerd[1454]: time="2025-02-13T19:12:47.642773795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:47.643171 kubelet[1773]: E0213 19:12:47.643137 1773 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 19:12:47.643287 kubelet[1773]: E0213 19:12:47.643195 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:47.643287 kubelet[1773]: E0213 19:12:47.643217 1773 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:47.643287 kubelet[1773]: E0213 19:12:47.643261 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:48.285028 kubelet[1773]: E0213 19:12:48.284987 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
19:12:48.574667 kubelet[1773]: I0213 19:12:48.574554 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08" Feb 13 19:12:48.575469 containerd[1454]: time="2025-02-13T19:12:48.575429468Z" level=info msg="StopPodSandbox for \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\"" Feb 13 19:12:48.576166 containerd[1454]: time="2025-02-13T19:12:48.576023101Z" level=info msg="Ensure that sandbox 0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08 in task-service has been cleanup successfully" Feb 13 19:12:48.576290 containerd[1454]: time="2025-02-13T19:12:48.576267926Z" level=info msg="TearDown network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\" successfully" Feb 13 19:12:48.576553 containerd[1454]: time="2025-02-13T19:12:48.576345883Z" level=info msg="StopPodSandbox for \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\" returns successfully" Feb 13 19:12:48.577635 systemd[1]: run-netns-cni\x2d8ab34e87\x2d8e9b\x2d7895\x2d47cf\x2d18d922fd9c7c.mount: Deactivated successfully. 
Feb 13 19:12:48.578387 containerd[1454]: time="2025-02-13T19:12:48.577917777Z" level=info msg="StopPodSandbox for \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\"" Feb 13 19:12:48.578387 containerd[1454]: time="2025-02-13T19:12:48.578001171Z" level=info msg="TearDown network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\" successfully" Feb 13 19:12:48.578387 containerd[1454]: time="2025-02-13T19:12:48.578014724Z" level=info msg="StopPodSandbox for \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\" returns successfully" Feb 13 19:12:48.578859 containerd[1454]: time="2025-02-13T19:12:48.578768149Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\"" Feb 13 19:12:48.579173 containerd[1454]: time="2025-02-13T19:12:48.579147420Z" level=info msg="TearDown network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" successfully" Feb 13 19:12:48.579173 containerd[1454]: time="2025-02-13T19:12:48.579164890Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" returns successfully" Feb 13 19:12:48.579870 containerd[1454]: time="2025-02-13T19:12:48.579640948Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\"" Feb 13 19:12:48.579870 containerd[1454]: time="2025-02-13T19:12:48.579802779Z" level=info msg="TearDown network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" successfully" Feb 13 19:12:48.580496 containerd[1454]: time="2025-02-13T19:12:48.580202998Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" returns successfully" Feb 13 19:12:48.580677 containerd[1454]: time="2025-02-13T19:12:48.580650072Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:4,}" Feb 13 19:12:48.622642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253793935.mount: Deactivated successfully. Feb 13 19:12:48.780251 containerd[1454]: time="2025-02-13T19:12:48.780204883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:48.782324 containerd[1454]: time="2025-02-13T19:12:48.782267107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:12:48.787569 containerd[1454]: time="2025-02-13T19:12:48.787524291Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:48.791792 containerd[1454]: time="2025-02-13T19:12:48.791716662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:48.797075 containerd[1454]: time="2025-02-13T19:12:48.793278682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.22777362s" Feb 13 19:12:48.797075 containerd[1454]: time="2025-02-13T19:12:48.793317980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:12:48.803725 containerd[1454]: time="2025-02-13T19:12:48.803676955Z" level=info msg="CreateContainer 
within sandbox \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:12:48.831905 containerd[1454]: time="2025-02-13T19:12:48.831215268Z" level=info msg="CreateContainer within sandbox \"1ffe2970d90e5e6c4375d31ace6518e08c4803ac835584507c5372fd0116b57f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f923c4be6e160762f5bd2760ed40a144482b4ca433cefb6816daec55e94a4f29\"" Feb 13 19:12:48.832201 containerd[1454]: time="2025-02-13T19:12:48.832174619Z" level=info msg="StartContainer for \"f923c4be6e160762f5bd2760ed40a144482b4ca433cefb6816daec55e94a4f29\"" Feb 13 19:12:48.834452 containerd[1454]: time="2025-02-13T19:12:48.834402592Z" level=error msg="Failed to destroy network for sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:48.834757 containerd[1454]: time="2025-02-13T19:12:48.834731211Z" level=error msg="encountered an error cleaning up failed sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:48.834810 containerd[1454]: time="2025-02-13T19:12:48.834789299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:12:48.835424 kubelet[1773]: E0213 19:12:48.835072 1773 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:12:48.835424 kubelet[1773]: E0213 19:12:48.835135 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:48.835424 kubelet[1773]: E0213 19:12:48.835155 1773 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f5f7f" Feb 13 19:12:48.835623 kubelet[1773]: E0213 19:12:48.835196 1773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f5f7f_calico-system(1fca2c23-28a9-4065-bfd1-1c47f655c46e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f5f7f" podUID="1fca2c23-28a9-4065-bfd1-1c47f655c46e" Feb 13 19:12:48.860043 systemd[1]: Started cri-containerd-f923c4be6e160762f5bd2760ed40a144482b4ca433cefb6816daec55e94a4f29.scope - libcontainer container f923c4be6e160762f5bd2760ed40a144482b4ca433cefb6816daec55e94a4f29. Feb 13 19:12:48.888121 containerd[1454]: time="2025-02-13T19:12:48.888037611Z" level=info msg="StartContainer for \"f923c4be6e160762f5bd2760ed40a144482b4ca433cefb6816daec55e94a4f29\" returns successfully" Feb 13 19:12:49.036688 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:12:49.036973 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:12:49.211628 kubelet[1773]: I0213 19:12:49.211491 1773 topology_manager.go:215] "Topology Admit Handler" podUID="a6e70394-5ac5-4d3f-a98b-91681ee6dff2" podNamespace="default" podName="nginx-deployment-85f456d6dd-wq6vc" Feb 13 19:12:49.217571 systemd[1]: Created slice kubepods-besteffort-poda6e70394_5ac5_4d3f_a98b_91681ee6dff2.slice - libcontainer container kubepods-besteffort-poda6e70394_5ac5_4d3f_a98b_91681ee6dff2.slice. 
Feb 13 19:12:49.287685 kubelet[1773]: E0213 19:12:49.287611 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:49.352960 kubelet[1773]: I0213 19:12:49.352919 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn629\" (UniqueName: \"kubernetes.io/projected/a6e70394-5ac5-4d3f-a98b-91681ee6dff2-kube-api-access-kn629\") pod \"nginx-deployment-85f456d6dd-wq6vc\" (UID: \"a6e70394-5ac5-4d3f-a98b-91681ee6dff2\") " pod="default/nginx-deployment-85f456d6dd-wq6vc" Feb 13 19:12:49.520594 containerd[1454]: time="2025-02-13T19:12:49.520482107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wq6vc,Uid:a6e70394-5ac5-4d3f-a98b-91681ee6dff2,Namespace:default,Attempt:0,}" Feb 13 19:12:49.579081 kubelet[1773]: I0213 19:12:49.579043 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.579748925Z" level=info msg="StopPodSandbox for \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\"" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.579947379Z" level=info msg="Ensure that sandbox f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4 in task-service has been cleanup successfully" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.583369633Z" level=info msg="TearDown network for sandbox \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\" successfully" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.583393780Z" level=info msg="StopPodSandbox for \"f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4\" returns successfully" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.583780614Z" level=info msg="StopPodSandbox 
for \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\"" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.583888676Z" level=info msg="TearDown network for sandbox \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\" successfully" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.583899510Z" level=info msg="StopPodSandbox for \"0978c4ded6e2b628ff73db17e4fe164b62acbcdc410c01f2963472b651c61c08\" returns successfully" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.584359225Z" level=info msg="StopPodSandbox for \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\"" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.584433265Z" level=info msg="TearDown network for sandbox \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\" successfully" Feb 13 19:12:49.584637 containerd[1454]: time="2025-02-13T19:12:49.584444140Z" level=info msg="StopPodSandbox for \"051c0d65478fc9a5cb21bb05a4f0c9ff7052dc68e5ebf7f0f30255e29a56b06f\" returns successfully" Feb 13 19:12:49.580793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f21a2ca4664f70a6cbf3fc0e4ead13cadccf4fbf0c56dd3dc21ecab284c961b4-shm.mount: Deactivated successfully. Feb 13 19:12:49.585271 containerd[1454]: time="2025-02-13T19:12:49.584877828Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\"" Feb 13 19:12:49.585271 containerd[1454]: time="2025-02-13T19:12:49.584977335Z" level=info msg="TearDown network for sandbox \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" successfully" Feb 13 19:12:49.584485 systemd[1]: run-netns-cni\x2def7329ec\x2d8d07\x2d75b8\x2d69ef\x2d042402a288dd.mount: Deactivated successfully. 
Feb 13 19:12:49.585879 containerd[1454]: time="2025-02-13T19:12:49.585843513Z" level=info msg="StopPodSandbox for \"1e1bd0520db04e6101090c0c77089501b58c8a82170fde2224d18a861e1d4df1\" returns successfully" Feb 13 19:12:49.586994 containerd[1454]: time="2025-02-13T19:12:49.586953681Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\"" Feb 13 19:12:49.587085 containerd[1454]: time="2025-02-13T19:12:49.587049749Z" level=info msg="TearDown network for sandbox \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" successfully" Feb 13 19:12:49.587085 containerd[1454]: time="2025-02-13T19:12:49.587064981Z" level=info msg="StopPodSandbox for \"37bcf4e43f61e473a0c6b646cb8c43d04415cc179f241570cc8cc0575007ba1a\" returns successfully" Feb 13 19:12:49.587868 containerd[1454]: time="2025-02-13T19:12:49.587629800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:5,}" Feb 13 19:12:49.706161 systemd-networkd[1382]: cali4c60cd99ecb: Link UP Feb 13 19:12:49.707115 systemd-networkd[1382]: cali4c60cd99ecb: Gained carrier Feb 13 19:12:49.718364 kubelet[1773]: I0213 19:12:49.718311 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cdt7w" podStartSLOduration=3.335072593 podStartE2EDuration="12.718268617s" podCreationTimestamp="2025-02-13 19:12:37 +0000 UTC" firstStartedPulling="2025-02-13 19:12:39.413005888 +0000 UTC m=+2.909411735" lastFinishedPulling="2025-02-13 19:12:48.796201912 +0000 UTC m=+12.292607759" observedRunningTime="2025-02-13 19:12:49.608806581 +0000 UTC m=+13.105212428" watchObservedRunningTime="2025-02-13 19:12:49.718268617 +0000 UTC m=+13.214674464" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.547 [INFO][2491] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 
19:12:49.567 [INFO][2491] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0 nginx-deployment-85f456d6dd- default a6e70394-5ac5-4d3f-a98b-91681ee6dff2 923 0 2025-02-13 19:12:49 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.88 nginx-deployment-85f456d6dd-wq6vc eth0 default [] [] [kns.default ksa.default.default] cali4c60cd99ecb [] []}} ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.567 [INFO][2491] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.651 [INFO][2503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" HandleID="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Workload="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.663 [INFO][2503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" HandleID="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Workload="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043bda0), 
Attrs:map[string]string{"namespace":"default", "node":"10.0.0.88", "pod":"nginx-deployment-85f456d6dd-wq6vc", "timestamp":"2025-02-13 19:12:49.651051241 +0000 UTC"}, Hostname:"10.0.0.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.663 [INFO][2503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.663 [INFO][2503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.663 [INFO][2503] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.88' Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.665 [INFO][2503] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.672 [INFO][2503] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.676 [INFO][2503] ipam/ipam.go 489: Trying affinity for 192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.679 [INFO][2503] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.681 [INFO][2503] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.681 [INFO][2503] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.64/26 handle="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" host="10.0.0.88" Feb 13 19:12:49.719922 
containerd[1454]: 2025-02-13 19:12:49.687 [INFO][2503] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.693 [INFO][2503] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.64/26 handle="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.698 [INFO][2503] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.65/26] block=192.168.125.64/26 handle="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.698 [INFO][2503] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.65/26] handle="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" host="10.0.0.88" Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.698 [INFO][2503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:12:49.719922 containerd[1454]: 2025-02-13 19:12:49.698 [INFO][2503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.65/26] IPv6=[] ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" HandleID="k8s-pod-network.094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Workload="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.720543 containerd[1454]: 2025-02-13 19:12:49.700 [INFO][2491] cni-plugin/k8s.go 386: Populated endpoint ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"a6e70394-5ac5-4d3f-a98b-91681ee6dff2", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-wq6vc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4c60cd99ecb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:12:49.720543 containerd[1454]: 2025-02-13 19:12:49.700 [INFO][2491] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.65/32] ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.720543 containerd[1454]: 2025-02-13 19:12:49.700 [INFO][2491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c60cd99ecb ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.720543 containerd[1454]: 2025-02-13 19:12:49.707 [INFO][2491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.720543 containerd[1454]: 2025-02-13 19:12:49.707 [INFO][2491] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"a6e70394-5ac5-4d3f-a98b-91681ee6dff2", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 49, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea", Pod:"nginx-deployment-85f456d6dd-wq6vc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4c60cd99ecb", MAC:"76:65:5e:24:0c:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:12:49.720543 containerd[1454]: 2025-02-13 19:12:49.718 [INFO][2491] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea" Namespace="default" Pod="nginx-deployment-85f456d6dd-wq6vc" WorkloadEndpoint="10.0.0.88-k8s-nginx--deployment--85f456d6dd--wq6vc-eth0" Feb 13 19:12:49.736631 containerd[1454]: time="2025-02-13T19:12:49.736400583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:12:49.736631 containerd[1454]: time="2025-02-13T19:12:49.736468187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:12:49.736631 containerd[1454]: time="2025-02-13T19:12:49.736484018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:49.736631 containerd[1454]: time="2025-02-13T19:12:49.736555780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:49.738949 systemd-networkd[1382]: cali0bdfeb7967a: Link UP Feb 13 19:12:49.739097 systemd-networkd[1382]: cali0bdfeb7967a: Gained carrier Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.632 [INFO][2513] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.646 [INFO][2513] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.88-k8s-csi--node--driver--f5f7f-eth0 csi-node-driver- calico-system 1fca2c23-28a9-4065-bfd1-1c47f655c46e 683 0 2025-02-13 19:12:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.88 csi-node-driver-f5f7f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0bdfeb7967a [] []}} ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.646 [INFO][2513] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.672 [INFO][2524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" HandleID="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Workload="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.693 [INFO][2524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" HandleID="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Workload="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000287150), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.88", "pod":"csi-node-driver-f5f7f", "timestamp":"2025-02-13 19:12:49.672843014 +0000 UTC"}, Hostname:"10.0.0.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.693 [INFO][2524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.698 [INFO][2524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.698 [INFO][2524] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.88' Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.701 [INFO][2524] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.706 [INFO][2524] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.712 [INFO][2524] ipam/ipam.go 489: Trying affinity for 192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.714 [INFO][2524] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.719 [INFO][2524] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.719 [INFO][2524] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.64/26 handle="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.721 [INFO][2524] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132 Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.726 [INFO][2524] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.64/26 handle="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.734 [INFO][2524] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.66/26] block=192.168.125.64/26 
handle="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.734 [INFO][2524] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.66/26] handle="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" host="10.0.0.88" Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.734 [INFO][2524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:12:49.756563 containerd[1454]: 2025-02-13 19:12:49.734 [INFO][2524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.66/26] IPv6=[] ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" HandleID="k8s-pod-network.41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Workload="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.757256 containerd[1454]: 2025-02-13 19:12:49.736 [INFO][2513] cni-plugin/k8s.go 386: Populated endpoint ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-csi--node--driver--f5f7f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1fca2c23-28a9-4065-bfd1-1c47f655c46e", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"", Pod:"csi-node-driver-f5f7f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0bdfeb7967a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:12:49.757256 containerd[1454]: 2025-02-13 19:12:49.736 [INFO][2513] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.66/32] ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.757256 containerd[1454]: 2025-02-13 19:12:49.736 [INFO][2513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bdfeb7967a ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.757256 containerd[1454]: 2025-02-13 19:12:49.738 [INFO][2513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.757256 containerd[1454]: 2025-02-13 19:12:49.739 [INFO][2513] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" 
Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-csi--node--driver--f5f7f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1fca2c23-28a9-4065-bfd1-1c47f655c46e", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132", Pod:"csi-node-driver-f5f7f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0bdfeb7967a", MAC:"12:a2:8d:ab:31:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:12:49.757256 containerd[1454]: 2025-02-13 19:12:49.754 [INFO][2513] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132" Namespace="calico-system" Pod="csi-node-driver-f5f7f" WorkloadEndpoint="10.0.0.88-k8s-csi--node--driver--f5f7f-eth0" Feb 13 19:12:49.758018 systemd[1]: Started 
cri-containerd-094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea.scope - libcontainer container 094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea. Feb 13 19:12:49.770377 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:12:49.777570 containerd[1454]: time="2025-02-13T19:12:49.777404145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:12:49.777570 containerd[1454]: time="2025-02-13T19:12:49.777465273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:12:49.777570 containerd[1454]: time="2025-02-13T19:12:49.777479025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:49.778532 containerd[1454]: time="2025-02-13T19:12:49.777551227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:49.788288 containerd[1454]: time="2025-02-13T19:12:49.788245481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wq6vc,Uid:a6e70394-5ac5-4d3f-a98b-91681ee6dff2,Namespace:default,Attempt:0,} returns sandbox id \"094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea\"" Feb 13 19:12:49.790391 containerd[1454]: time="2025-02-13T19:12:49.790106808Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:12:49.804033 systemd[1]: Started cri-containerd-41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132.scope - libcontainer container 41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132. 
Feb 13 19:12:49.812827 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:12:49.822114 containerd[1454]: time="2025-02-13T19:12:49.822074471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5f7f,Uid:1fca2c23-28a9-4065-bfd1-1c47f655c46e,Namespace:calico-system,Attempt:5,} returns sandbox id \"41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132\"" Feb 13 19:12:50.289206 kubelet[1773]: E0213 19:12:50.289149 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:50.441850 kernel: bpftool[2762]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:12:50.592156 kubelet[1773]: I0213 19:12:50.592064 1773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:12:50.594798 systemd-networkd[1382]: vxlan.calico: Link UP Feb 13 19:12:50.594804 systemd-networkd[1382]: vxlan.calico: Gained carrier Feb 13 19:12:51.220952 systemd-networkd[1382]: cali4c60cd99ecb: Gained IPv6LL Feb 13 19:12:51.290622 kubelet[1773]: E0213 19:12:51.290583 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:51.684489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352170575.mount: Deactivated successfully. 
Feb 13 19:12:51.733090 systemd-networkd[1382]: cali0bdfeb7967a: Gained IPv6LL Feb 13 19:12:52.291778 kubelet[1773]: E0213 19:12:52.291728 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:52.390959 containerd[1454]: time="2025-02-13T19:12:52.390905841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:52.391422 containerd[1454]: time="2025-02-13T19:12:52.391375413Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:12:52.394950 containerd[1454]: time="2025-02-13T19:12:52.394644267Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:52.397926 containerd[1454]: time="2025-02-13T19:12:52.397874221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:52.399033 containerd[1454]: time="2025-02-13T19:12:52.398960654Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.608816346s" Feb 13 19:12:52.399033 containerd[1454]: time="2025-02-13T19:12:52.398996316Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:12:52.400564 containerd[1454]: time="2025-02-13T19:12:52.400528573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 
19:12:52.401705 containerd[1454]: time="2025-02-13T19:12:52.401676337Z" level=info msg="CreateContainer within sandbox \"094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:12:52.411038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337662557.mount: Deactivated successfully. Feb 13 19:12:52.411436 containerd[1454]: time="2025-02-13T19:12:52.411111880Z" level=info msg="CreateContainer within sandbox \"094d2e20c550a401b240f7cc2465b0d071a121832b025df2c9eb3048713c4aea\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"dcebbd17df650048d48535e33eb1324937c60ad98b02a9758b93a97275a1ac09\"" Feb 13 19:12:52.412073 containerd[1454]: time="2025-02-13T19:12:52.411643742Z" level=info msg="StartContainer for \"dcebbd17df650048d48535e33eb1324937c60ad98b02a9758b93a97275a1ac09\"" Feb 13 19:12:52.494006 systemd[1]: Started cri-containerd-dcebbd17df650048d48535e33eb1324937c60ad98b02a9758b93a97275a1ac09.scope - libcontainer container dcebbd17df650048d48535e33eb1324937c60ad98b02a9758b93a97275a1ac09. 
Feb 13 19:12:52.590283 containerd[1454]: time="2025-02-13T19:12:52.590139401Z" level=info msg="StartContainer for \"dcebbd17df650048d48535e33eb1324937c60ad98b02a9758b93a97275a1ac09\" returns successfully" Feb 13 19:12:52.605593 kubelet[1773]: I0213 19:12:52.605468 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-wq6vc" podStartSLOduration=0.9946102 podStartE2EDuration="3.605451494s" podCreationTimestamp="2025-02-13 19:12:49 +0000 UTC" firstStartedPulling="2025-02-13 19:12:49.789523399 +0000 UTC m=+13.285929246" lastFinishedPulling="2025-02-13 19:12:52.400364613 +0000 UTC m=+15.896770540" observedRunningTime="2025-02-13 19:12:52.605304286 +0000 UTC m=+16.101710133" watchObservedRunningTime="2025-02-13 19:12:52.605451494 +0000 UTC m=+16.101857341" Feb 13 19:12:52.629071 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Feb 13 19:12:53.292088 kubelet[1773]: E0213 19:12:53.292016 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:53.543214 containerd[1454]: time="2025-02-13T19:12:53.543098990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:53.543736 containerd[1454]: time="2025-02-13T19:12:53.543687113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:12:53.544492 containerd[1454]: time="2025-02-13T19:12:53.544437561Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:53.546446 containerd[1454]: time="2025-02-13T19:12:53.546395841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 19:12:53.547163 containerd[1454]: time="2025-02-13T19:12:53.547136653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.146573776s" Feb 13 19:12:53.547232 containerd[1454]: time="2025-02-13T19:12:53.547165639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:12:53.549029 containerd[1454]: time="2025-02-13T19:12:53.549001176Z" level=info msg="CreateContainer within sandbox \"41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:12:53.576122 containerd[1454]: time="2025-02-13T19:12:53.576073575Z" level=info msg="CreateContainer within sandbox \"41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e3ac27a78deff0f985aa6e26665a13ee82d09c971bb9d502d3a2b172a1059a89\"" Feb 13 19:12:53.576796 containerd[1454]: time="2025-02-13T19:12:53.576675972Z" level=info msg="StartContainer for \"e3ac27a78deff0f985aa6e26665a13ee82d09c971bb9d502d3a2b172a1059a89\"" Feb 13 19:12:53.602961 systemd[1]: Started cri-containerd-e3ac27a78deff0f985aa6e26665a13ee82d09c971bb9d502d3a2b172a1059a89.scope - libcontainer container e3ac27a78deff0f985aa6e26665a13ee82d09c971bb9d502d3a2b172a1059a89. 
Feb 13 19:12:53.641912 containerd[1454]: time="2025-02-13T19:12:53.637668073Z" level=info msg="StartContainer for \"e3ac27a78deff0f985aa6e26665a13ee82d09c971bb9d502d3a2b172a1059a89\" returns successfully" Feb 13 19:12:53.641912 containerd[1454]: time="2025-02-13T19:12:53.638653290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:12:54.292424 kubelet[1773]: E0213 19:12:54.292375 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:54.653210 containerd[1454]: time="2025-02-13T19:12:54.653035465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:54.654123 containerd[1454]: time="2025-02-13T19:12:54.654080110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:12:54.654919 containerd[1454]: time="2025-02-13T19:12:54.654883344Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:54.657003 containerd[1454]: time="2025-02-13T19:12:54.656971154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:12:54.657837 containerd[1454]: time="2025-02-13T19:12:54.657791420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.019112742s" Feb 13 19:12:54.657876 containerd[1454]: time="2025-02-13T19:12:54.657845596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:12:54.659874 containerd[1454]: time="2025-02-13T19:12:54.659815819Z" level=info msg="CreateContainer within sandbox \"41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:12:54.672120 containerd[1454]: time="2025-02-13T19:12:54.672076558Z" level=info msg="CreateContainer within sandbox \"41fde6a33bd2f99c101747c82dddb15d79f0493cce6fe152d8b048538f449132\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"36f754cb9b5db698c59a27d53264e670aca035da1206d9ca1a2db602511ad812\"" Feb 13 19:12:54.672727 containerd[1454]: time="2025-02-13T19:12:54.672689879Z" level=info msg="StartContainer for \"36f754cb9b5db698c59a27d53264e670aca035da1206d9ca1a2db602511ad812\"" Feb 13 19:12:54.716982 systemd[1]: Started cri-containerd-36f754cb9b5db698c59a27d53264e670aca035da1206d9ca1a2db602511ad812.scope - libcontainer container 36f754cb9b5db698c59a27d53264e670aca035da1206d9ca1a2db602511ad812. 
Feb 13 19:12:54.740688 containerd[1454]: time="2025-02-13T19:12:54.740637149Z" level=info msg="StartContainer for \"36f754cb9b5db698c59a27d53264e670aca035da1206d9ca1a2db602511ad812\" returns successfully" Feb 13 19:12:55.292768 kubelet[1773]: E0213 19:12:55.292718 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:55.688747 kubelet[1773]: I0213 19:12:55.688649 1773 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:12:55.688747 kubelet[1773]: I0213 19:12:55.688681 1773 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:12:56.293663 kubelet[1773]: E0213 19:12:56.293620 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:56.887880 kubelet[1773]: I0213 19:12:56.887799 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f5f7f" podStartSLOduration=15.052525079 podStartE2EDuration="19.887777567s" podCreationTimestamp="2025-02-13 19:12:37 +0000 UTC" firstStartedPulling="2025-02-13 19:12:49.823189636 +0000 UTC m=+13.319595483" lastFinishedPulling="2025-02-13 19:12:54.658442124 +0000 UTC m=+18.154847971" observedRunningTime="2025-02-13 19:12:55.641374805 +0000 UTC m=+19.137780652" watchObservedRunningTime="2025-02-13 19:12:56.887777567 +0000 UTC m=+20.384183414" Feb 13 19:12:56.888165 kubelet[1773]: I0213 19:12:56.888145 1773 topology_manager.go:215] "Topology Admit Handler" podUID="f0771175-a954-489a-8fb9-8566e99bd54f" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:12:56.893744 kubelet[1773]: I0213 19:12:56.893660 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f0771175-a954-489a-8fb9-8566e99bd54f-data\") pod \"nfs-server-provisioner-0\" (UID: \"f0771175-a954-489a-8fb9-8566e99bd54f\") " pod="default/nfs-server-provisioner-0" Feb 13 19:12:56.893744 kubelet[1773]: I0213 19:12:56.893694 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9545r\" (UniqueName: \"kubernetes.io/projected/f0771175-a954-489a-8fb9-8566e99bd54f-kube-api-access-9545r\") pod \"nfs-server-provisioner-0\" (UID: \"f0771175-a954-489a-8fb9-8566e99bd54f\") " pod="default/nfs-server-provisioner-0" Feb 13 19:12:56.896736 systemd[1]: Created slice kubepods-besteffort-podf0771175_a954_489a_8fb9_8566e99bd54f.slice - libcontainer container kubepods-besteffort-podf0771175_a954_489a_8fb9_8566e99bd54f.slice. Feb 13 19:12:56.978855 kubelet[1773]: I0213 19:12:56.978556 1773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:12:57.027935 systemd[1]: run-containerd-runc-k8s.io-f923c4be6e160762f5bd2760ed40a144482b4ca433cefb6816daec55e94a4f29-runc.PBQQDh.mount: Deactivated successfully. 
Feb 13 19:12:57.204525 containerd[1454]: time="2025-02-13T19:12:57.204414491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f0771175-a954-489a-8fb9-8566e99bd54f,Namespace:default,Attempt:0,}" Feb 13 19:12:57.278103 kubelet[1773]: E0213 19:12:57.278047 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:57.294389 kubelet[1773]: E0213 19:12:57.294357 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:57.327436 systemd-networkd[1382]: cali60e51b789ff: Link UP Feb 13 19:12:57.327596 systemd-networkd[1382]: cali60e51b789ff: Gained carrier Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.248 [INFO][3064] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.88-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default f0771175-a954-489a-8fb9-8566e99bd54f 1031 0 2025-02-13 19:12:56 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.88 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" 
Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.248 [INFO][3064] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.272 [INFO][3076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" HandleID="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Workload="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.286 [INFO][3076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" HandleID="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Workload="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003070c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.88", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:12:57.272193922 +0000 UTC"}, Hostname:"10.0.0.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.286 [INFO][3076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.286 [INFO][3076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.286 [INFO][3076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.88' Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.288 [INFO][3076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.294 [INFO][3076] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.301 [INFO][3076] ipam/ipam.go 489: Trying affinity for 192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.304 [INFO][3076] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.306 [INFO][3076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.64/26 host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.307 [INFO][3076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.64/26 handle="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.308 [INFO][3076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17 Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.313 [INFO][3076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.64/26 handle="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.322 [INFO][3076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.67/26] block=192.168.125.64/26 
handle="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.322 [INFO][3076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.67/26] handle="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" host="10.0.0.88" Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.322 [INFO][3076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:12:57.347467 containerd[1454]: 2025-02-13 19:12:57.322 [INFO][3076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.67/26] IPv6=[] ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" HandleID="k8s-pod-network.7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Workload="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.348023 containerd[1454]: 2025-02-13 19:12:57.324 [INFO][3064] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f0771175-a954-489a-8fb9-8566e99bd54f", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:12:57.348023 containerd[1454]: 2025-02-13 19:12:57.324 [INFO][3064] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.67/32] ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.348023 containerd[1454]: 2025-02-13 19:12:57.324 [INFO][3064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.348023 containerd[1454]: 2025-02-13 19:12:57.327 [INFO][3064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.348153 containerd[1454]: 2025-02-13 19:12:57.327 [INFO][3064] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f0771175-a954-489a-8fb9-8566e99bd54f", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f2:92:53:8e:0b:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:12:57.348153 containerd[1454]: 2025-02-13 19:12:57.342 [INFO][3064] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.88-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:12:57.365333 containerd[1454]: time="2025-02-13T19:12:57.365252731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:12:57.365333 containerd[1454]: time="2025-02-13T19:12:57.365312227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:12:57.365333 containerd[1454]: time="2025-02-13T19:12:57.365327021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:57.365493 containerd[1454]: time="2025-02-13T19:12:57.365404109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:12:57.382992 systemd[1]: Started cri-containerd-7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17.scope - libcontainer container 7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17. Feb 13 19:12:57.392487 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:12:57.457568 containerd[1454]: time="2025-02-13T19:12:57.457457574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f0771175-a954-489a-8fb9-8566e99bd54f,Namespace:default,Attempt:0,} returns sandbox id \"7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17\"" Feb 13 19:12:57.472145 containerd[1454]: time="2025-02-13T19:12:57.472095277Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:12:58.294517 kubelet[1773]: E0213 19:12:58.294456 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:59.093297 systemd-networkd[1382]: cali60e51b789ff: Gained IPv6LL Feb 13 19:12:59.294788 kubelet[1773]: E0213 19:12:59.294754 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:12:59.337364 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542759483.mount: Deactivated successfully. Feb 13 19:13:00.295561 kubelet[1773]: E0213 19:13:00.295477 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:00.708255 containerd[1454]: time="2025-02-13T19:13:00.708114290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:13:00.709713 containerd[1454]: time="2025-02-13T19:13:00.709657909Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Feb 13 19:13:00.710860 containerd[1454]: time="2025-02-13T19:13:00.710785285Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:13:00.714688 containerd[1454]: time="2025-02-13T19:13:00.714644273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:13:00.715902 containerd[1454]: time="2025-02-13T19:13:00.715862335Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.243676775s" Feb 13 19:13:00.715902 containerd[1454]: time="2025-02-13T19:13:00.715898361Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:13:00.721808 containerd[1454]: time="2025-02-13T19:13:00.721754798Z" level=info msg="CreateContainer within sandbox \"7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:13:00.731524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280658418.mount: Deactivated successfully. Feb 13 19:13:00.734127 containerd[1454]: time="2025-02-13T19:13:00.734075203Z" level=info msg="CreateContainer within sandbox \"7ce8a8a434585ed70d9d610b1570c01a098f12d6e0578746b7a2125b315abe17\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c30c0eb6cdd6f83ac08e004127dfcf59ebc23ff08681d3ecd1c876165bd17dbf\"" Feb 13 19:13:00.734800 containerd[1454]: time="2025-02-13T19:13:00.734775299Z" level=info msg="StartContainer for \"c30c0eb6cdd6f83ac08e004127dfcf59ebc23ff08681d3ecd1c876165bd17dbf\"" Feb 13 19:13:00.768003 systemd[1]: Started cri-containerd-c30c0eb6cdd6f83ac08e004127dfcf59ebc23ff08681d3ecd1c876165bd17dbf.scope - libcontainer container c30c0eb6cdd6f83ac08e004127dfcf59ebc23ff08681d3ecd1c876165bd17dbf. 
Feb 13 19:13:00.798667 containerd[1454]: time="2025-02-13T19:13:00.798544268Z" level=info msg="StartContainer for \"c30c0eb6cdd6f83ac08e004127dfcf59ebc23ff08681d3ecd1c876165bd17dbf\" returns successfully" Feb 13 19:13:01.295938 kubelet[1773]: E0213 19:13:01.295891 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:01.658792 kubelet[1773]: I0213 19:13:01.656881 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.411695029 podStartE2EDuration="5.656862551s" podCreationTimestamp="2025-02-13 19:12:56 +0000 UTC" firstStartedPulling="2025-02-13 19:12:57.471840702 +0000 UTC m=+20.968246549" lastFinishedPulling="2025-02-13 19:13:00.717008224 +0000 UTC m=+24.213414071" observedRunningTime="2025-02-13 19:13:01.656604205 +0000 UTC m=+25.153010052" watchObservedRunningTime="2025-02-13 19:13:01.656862551 +0000 UTC m=+25.153268398" Feb 13 19:13:02.296604 kubelet[1773]: E0213 19:13:02.296555 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:03.297730 kubelet[1773]: E0213 19:13:03.297655 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:04.298090 kubelet[1773]: E0213 19:13:04.298035 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:05.298783 kubelet[1773]: E0213 19:13:05.298739 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:06.299786 kubelet[1773]: E0213 19:13:06.299735 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:07.300684 kubelet[1773]: E0213 19:13:07.300649 1773 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:08.301263 kubelet[1773]: E0213 19:13:08.301163 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:09.302285 kubelet[1773]: E0213 19:13:09.302230 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:10.303380 kubelet[1773]: E0213 19:13:10.303337 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:13:10.543628 kubelet[1773]: I0213 19:13:10.543582 1773 topology_manager.go:215] "Topology Admit Handler" podUID="0f177089-6f1d-415a-9a88-87c8f451b144" podNamespace="default" podName="test-pod-1" Feb 13 19:13:10.550347 systemd[1]: Created slice kubepods-besteffort-pod0f177089_6f1d_415a_9a88_87c8f451b144.slice - libcontainer container kubepods-besteffort-pod0f177089_6f1d_415a_9a88_87c8f451b144.slice. Feb 13 19:13:10.662027 kubelet[1773]: I0213 19:13:10.661903 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03341611-56a1-41ac-bd1b-9fb17a111945\" (UniqueName: \"kubernetes.io/nfs/0f177089-6f1d-415a-9a88-87c8f451b144-pvc-03341611-56a1-41ac-bd1b-9fb17a111945\") pod \"test-pod-1\" (UID: \"0f177089-6f1d-415a-9a88-87c8f451b144\") " pod="default/test-pod-1" Feb 13 19:13:10.662027 kubelet[1773]: I0213 19:13:10.661948 1773 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7v6q\" (UniqueName: \"kubernetes.io/projected/0f177089-6f1d-415a-9a88-87c8f451b144-kube-api-access-d7v6q\") pod \"test-pod-1\" (UID: \"0f177089-6f1d-415a-9a88-87c8f451b144\") " pod="default/test-pod-1" Feb 13 19:13:10.783845 kernel: FS-Cache: Loaded Feb 13 19:13:10.806851 kernel: RPC: Registered named UNIX socket transport module. 
Feb 13 19:13:10.806950 kernel: RPC: Registered udp transport module.
Feb 13 19:13:10.806967 kernel: RPC: Registered tcp transport module.
Feb 13 19:13:10.808358 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:13:10.808390 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:13:10.972153 kernel: NFS: Registering the id_resolver key type
Feb 13 19:13:10.972296 kernel: Key type id_resolver registered
Feb 13 19:13:10.972322 kernel: Key type id_legacy registered
Feb 13 19:13:10.996780 nfsidmap[3278]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:13:11.000021 nfsidmap[3279]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:13:11.153277 containerd[1454]: time="2025-02-13T19:13:11.153227331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0f177089-6f1d-415a-9a88-87c8f451b144,Namespace:default,Attempt:0,}"
Feb 13 19:13:11.301727 systemd-networkd[1382]: cali5ec59c6bf6e: Link UP
Feb 13 19:13:11.301919 systemd-networkd[1382]: cali5ec59c6bf6e: Gained carrier
Feb 13 19:13:11.304441 kubelet[1773]: E0213 19:13:11.304398 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.238 [INFO][3280] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.88-k8s-test--pod--1-eth0 default 0f177089-6f1d-415a-9a88-87c8f451b144 1101 0 2025-02-13 19:12:57 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.88 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.238 [INFO][3280] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.264 [INFO][3294] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" HandleID="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Workload="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.275 [INFO][3294] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" HandleID="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Workload="10.0.0.88-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137a50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.88", "pod":"test-pod-1", "timestamp":"2025-02-13 19:13:11.264304624 +0000 UTC"}, Hostname:"10.0.0.88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.275 [INFO][3294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.275 [INFO][3294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.275 [INFO][3294] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.88'
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.276 [INFO][3294] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.280 [INFO][3294] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.284 [INFO][3294] ipam/ipam.go 489: Trying affinity for 192.168.125.64/26 host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.286 [INFO][3294] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.64/26 host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.288 [INFO][3294] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.64/26 host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.288 [INFO][3294] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.64/26 handle="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.289 [INFO][3294] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.294 [INFO][3294] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.64/26 handle="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.298 [INFO][3294] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.68/26] block=192.168.125.64/26 handle="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.298 [INFO][3294] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.68/26] handle="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" host="10.0.0.88"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.298 [INFO][3294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.298 [INFO][3294] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.68/26] IPv6=[] ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" HandleID="k8s-pod-network.a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Workload="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.312396 containerd[1454]: 2025-02-13 19:13:11.300 [INFO][3280] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0f177089-6f1d-415a-9a88-87c8f451b144", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:13:11.313056 containerd[1454]: 2025-02-13 19:13:11.300 [INFO][3280] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.68/32] ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.313056 containerd[1454]: 2025-02-13 19:13:11.300 [INFO][3280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.313056 containerd[1454]: 2025-02-13 19:13:11.301 [INFO][3280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.313056 containerd[1454]: 2025-02-13 19:13:11.302 [INFO][3280] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.88-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0f177089-6f1d-415a-9a88-87c8f451b144", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 12, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.88", ContainerID:"a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"72:af:91:45:12:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:13:11.313056 containerd[1454]: 2025-02-13 19:13:11.310 [INFO][3280] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.88-k8s-test--pod--1-eth0"
Feb 13 19:13:11.327588 containerd[1454]: time="2025-02-13T19:13:11.327454391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:13:11.327588 containerd[1454]: time="2025-02-13T19:13:11.327527891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:13:11.327730 containerd[1454]: time="2025-02-13T19:13:11.327543207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:13:11.328090 containerd[1454]: time="2025-02-13T19:13:11.328018641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:13:11.345995 systemd[1]: Started cri-containerd-a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578.scope - libcontainer container a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578.
Feb 13 19:13:11.356701 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:13:11.373556 containerd[1454]: time="2025-02-13T19:13:11.373518330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0f177089-6f1d-415a-9a88-87c8f451b144,Namespace:default,Attempt:0,} returns sandbox id \"a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578\""
Feb 13 19:13:11.374863 containerd[1454]: time="2025-02-13T19:13:11.374840660Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:13:11.672950 containerd[1454]: time="2025-02-13T19:13:11.672845402Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:13:11.675028 containerd[1454]: time="2025-02-13T19:13:11.674404669Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:13:11.677357 containerd[1454]: time="2025-02-13T19:13:11.677310418Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 302.438247ms"
Feb 13 19:13:11.677357 containerd[1454]: time="2025-02-13T19:13:11.677341330Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 19:13:11.678981 containerd[1454]: time="2025-02-13T19:13:11.678939306Z" level=info msg="CreateContainer within sandbox \"a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:13:11.689197 containerd[1454]: time="2025-02-13T19:13:11.689163793Z" level=info msg="CreateContainer within sandbox \"a29de15a817bcd6bf4277be0db80807686e195fb63774f05015effcc47277578\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4e7a57d28d32ccaba95c4abdcf2f222d3e0b7084e563011297e54d81bc871572\""
Feb 13 19:13:11.689880 containerd[1454]: time="2025-02-13T19:13:11.689852970Z" level=info msg="StartContainer for \"4e7a57d28d32ccaba95c4abdcf2f222d3e0b7084e563011297e54d81bc871572\""
Feb 13 19:13:11.719993 systemd[1]: Started cri-containerd-4e7a57d28d32ccaba95c4abdcf2f222d3e0b7084e563011297e54d81bc871572.scope - libcontainer container 4e7a57d28d32ccaba95c4abdcf2f222d3e0b7084e563011297e54d81bc871572.
Feb 13 19:13:11.740786 containerd[1454]: time="2025-02-13T19:13:11.740747469Z" level=info msg="StartContainer for \"4e7a57d28d32ccaba95c4abdcf2f222d3e0b7084e563011297e54d81bc871572\" returns successfully"
Feb 13 19:13:12.305220 kubelet[1773]: E0213 19:13:12.305166 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:13:12.683437 kubelet[1773]: I0213 19:13:12.683305 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.380075154 podStartE2EDuration="15.683291074s" podCreationTimestamp="2025-02-13 19:12:57 +0000 UTC" firstStartedPulling="2025-02-13 19:13:11.374595125 +0000 UTC m=+34.871000972" lastFinishedPulling="2025-02-13 19:13:11.677811045 +0000 UTC m=+35.174216892" observedRunningTime="2025-02-13 19:13:12.68236963 +0000 UTC m=+36.178775437" watchObservedRunningTime="2025-02-13 19:13:12.683291074 +0000 UTC m=+36.179696881"
Feb 13 19:13:12.725094 systemd-networkd[1382]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:13:13.047059 update_engine[1443]: I20250213 19:13:13.046886 1443 update_attempter.cc:509] Updating boot flags...
Feb 13 19:13:13.069855 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3258)
Feb 13 19:13:13.104852 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3268)
Feb 13 19:13:13.126958 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3268)
Feb 13 19:13:13.305531 kubelet[1773]: E0213 19:13:13.305269 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:13:14.305921 kubelet[1773]: E0213 19:13:14.305866 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"