Feb 13 19:40:37.905117 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:40:37.905140 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 19:40:37.905150 kernel: KASLR enabled Feb 13 19:40:37.905156 kernel: efi: EFI v2.7 by EDK II Feb 13 19:40:37.905162 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Feb 13 19:40:37.905168 kernel: random: crng init done Feb 13 19:40:37.905176 kernel: ACPI: Early table checksum verification disabled Feb 13 19:40:37.905182 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Feb 13 19:40:37.905188 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:40:37.905196 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905202 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905208 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905214 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905220 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905228 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905236 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905242 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905249 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:40:37.905255 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 19:40:37.905262 kernel: NUMA: Failed to 
initialise from firmware Feb 13 19:40:37.905268 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:40:37.905275 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Feb 13 19:40:37.905281 kernel: Zone ranges: Feb 13 19:40:37.905287 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:40:37.905294 kernel: DMA32 empty Feb 13 19:40:37.905302 kernel: Normal empty Feb 13 19:40:37.905308 kernel: Movable zone start for each node Feb 13 19:40:37.905404 kernel: Early memory node ranges Feb 13 19:40:37.905412 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 19:40:37.905419 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:40:37.905425 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:40:37.905432 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:40:37.905438 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:40:37.905445 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:40:37.905451 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:40:37.905458 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:40:37.905464 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:40:37.905473 kernel: psci: probing for conduit method from ACPI. Feb 13 19:40:37.905480 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 19:40:37.905487 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:40:37.905496 kernel: psci: Trusted OS migration not required Feb 13 19:40:37.905503 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:40:37.905510 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:40:37.905519 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:40:37.905526 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:40:37.905533 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:40:37.905540 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:40:37.905547 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:40:37.905554 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:40:37.905560 kernel: CPU features: detected: Spectre-v4 Feb 13 19:40:37.905567 kernel: CPU features: detected: Spectre-BHB Feb 13 19:40:37.905574 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:40:37.905581 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:40:37.905589 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:40:37.905597 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:40:37.905603 kernel: alternatives: applying boot alternatives Feb 13 19:40:37.905612 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:40:37.905619 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 19:40:37.905626 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:40:37.905633 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:40:37.905639 kernel: Fallback order for Node 0: 0 Feb 13 19:40:37.905646 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:40:37.905653 kernel: Policy zone: DMA Feb 13 19:40:37.905660 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:40:37.905668 kernel: software IO TLB: area num 4. Feb 13 19:40:37.905675 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:40:37.905683 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Feb 13 19:40:37.905690 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:40:37.905697 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:40:37.905704 kernel: rcu: RCU event tracing is enabled. Feb 13 19:40:37.905711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:40:37.905718 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:40:37.905725 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:40:37.905732 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:40:37.905739 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:40:37.905746 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:40:37.905754 kernel: GICv3: 256 SPIs implemented Feb 13 19:40:37.905761 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:40:37.905768 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:40:37.905775 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:40:37.905782 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:40:37.905796 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:40:37.905803 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:40:37.905811 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:40:37.905817 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:40:37.905825 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:40:37.905832 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:40:37.905841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:40:37.905848 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:40:37.905855 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:40:37.905862 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:40:37.905869 kernel: arm-pv: using stolen time PV Feb 13 19:40:37.905876 kernel: Console: colour dummy device 80x25 Feb 13 19:40:37.905883 kernel: ACPI: Core revision 20230628 Feb 13 19:40:37.905891 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Feb 13 19:40:37.905898 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:40:37.905905 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:40:37.905914 kernel: landlock: Up and running. Feb 13 19:40:37.905921 kernel: SELinux: Initializing. Feb 13 19:40:37.905928 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:40:37.905935 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:40:37.905943 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:40:37.905951 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:40:37.905958 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:40:37.905966 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:40:37.905973 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:40:37.905981 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:40:37.905988 kernel: Remapping and enabling EFI services. Feb 13 19:40:37.905995 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:40:37.906002 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:40:37.906009 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:40:37.906017 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:40:37.906024 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:40:37.906031 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:40:37.906038 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:40:37.906045 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:40:37.906054 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:40:37.906061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:40:37.906073 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:40:37.906082 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:40:37.906090 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:40:37.906098 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:40:37.906105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:40:37.906112 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:40:37.906120 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:40:37.906129 kernel: SMP: Total of 4 processors activated. 
Feb 13 19:40:37.906136 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:40:37.906144 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:40:37.906151 kernel: CPU features: detected: Common not Private translations Feb 13 19:40:37.906159 kernel: CPU features: detected: CRC32 instructions Feb 13 19:40:37.906166 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:40:37.906174 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:40:37.906181 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:40:37.906190 kernel: CPU features: detected: Privileged Access Never Feb 13 19:40:37.906198 kernel: CPU features: detected: RAS Extension Support Feb 13 19:40:37.906205 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:40:37.906213 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:40:37.906220 kernel: alternatives: applying system-wide alternatives Feb 13 19:40:37.906228 kernel: devtmpfs: initialized Feb 13 19:40:37.906235 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:40:37.906243 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:40:37.906250 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:40:37.906260 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:40:37.906267 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Feb 13 19:40:37.906275 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:40:37.906282 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:40:37.906290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:40:37.906298 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:40:37.906305 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:40:37.906319 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Feb 13 19:40:37.906337 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:40:37.906347 kernel: cpuidle: using governor menu Feb 13 19:40:37.906355 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:40:37.906362 kernel: ASID allocator initialised with 32768 entries Feb 13 19:40:37.906370 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:40:37.906377 kernel: Serial: AMBA PL011 UART driver Feb 13 19:40:37.906385 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:40:37.906392 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:40:37.906399 kernel: Modules: 509040 pages in range for PLT usage Feb 13 19:40:37.906411 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:40:37.906420 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:40:37.906427 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:40:37.906435 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:40:37.906442 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:40:37.906450 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:40:37.906457 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages Feb 13 19:40:37.906465 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:40:37.906473 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:40:37.906480 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:40:37.906489 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:40:37.906497 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:40:37.906504 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:40:37.906512 kernel: ACPI: Interpreter enabled Feb 13 19:40:37.906519 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:40:37.906526 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:40:37.906534 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:40:37.906542 kernel: printk: console [ttyAMA0] enabled Feb 13 19:40:37.906549 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:40:37.906712 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:40:37.906796 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:40:37.906870 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:40:37.906937 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:40:37.907002 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:40:37.907013 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 19:40:37.907020 kernel: PCI host bridge to bus 0000:00 Feb 13 19:40:37.907112 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:40:37.907175 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:40:37.907237 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:40:37.907298 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 
19:40:37.907435 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:40:37.907525 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:40:37.907621 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:40:37.907689 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:40:37.907757 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:40:37.907835 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:40:37.907904 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:40:37.907974 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:40:37.908039 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:40:37.908102 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:40:37.908165 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:40:37.908175 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:40:37.908183 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:40:37.908196 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:40:37.908204 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:40:37.908212 kernel: iommu: Default domain type: Translated Feb 13 19:40:37.908219 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:40:37.908227 kernel: efivars: Registered efivars operations Feb 13 19:40:37.908236 kernel: vgaarb: loaded Feb 13 19:40:37.908244 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:40:37.908251 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:40:37.908259 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:40:37.908266 kernel: pnp: PnP ACPI init Feb 13 19:40:37.908358 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 
19:40:37.908370 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:40:37.908378 kernel: NET: Registered PF_INET protocol family Feb 13 19:40:37.908388 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:40:37.908396 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:40:37.908404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:40:37.908412 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:40:37.908419 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:40:37.908427 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:40:37.908435 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:40:37.908442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:40:37.908450 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:40:37.908459 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:40:37.908466 kernel: kvm [1]: HYP mode not available Feb 13 19:40:37.908474 kernel: Initialise system trusted keyrings Feb 13 19:40:37.908481 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:40:37.908488 kernel: Key type asymmetric registered Feb 13 19:40:37.908496 kernel: Asymmetric key parser 'x509' registered Feb 13 19:40:37.908503 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:40:37.908511 kernel: io scheduler mq-deadline registered Feb 13 19:40:37.908518 kernel: io scheduler kyber registered Feb 13 19:40:37.908527 kernel: io scheduler bfq registered Feb 13 19:40:37.908535 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:40:37.908543 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:40:37.908551 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:40:37.908621 kernel: virtio-pci 
0000:00:01.0: enabling device (0005 -> 0007) Feb 13 19:40:37.908632 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:40:37.908639 kernel: thunder_xcv, ver 1.0 Feb 13 19:40:37.908647 kernel: thunder_bgx, ver 1.0 Feb 13 19:40:37.908654 kernel: nicpf, ver 1.0 Feb 13 19:40:37.908664 kernel: nicvf, ver 1.0 Feb 13 19:40:37.908749 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:40:37.908842 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:40:37 UTC (1739475637) Feb 13 19:40:37.908853 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:40:37.908860 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:40:37.908868 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:40:37.908876 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:40:37.908884 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:40:37.908894 kernel: Segment Routing with IPv6 Feb 13 19:40:37.908902 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:40:37.908909 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:40:37.908916 kernel: Key type dns_resolver registered Feb 13 19:40:37.908924 kernel: registered taskstats version 1 Feb 13 19:40:37.908932 kernel: Loading compiled-in X.509 certificates Feb 13 19:40:37.908940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 19:40:37.908947 kernel: Key type .fscrypt registered Feb 13 19:40:37.908955 kernel: Key type fscrypt-provisioning registered Feb 13 19:40:37.908963 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:40:37.908971 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:40:37.908978 kernel: ima: No architecture policies found Feb 13 19:40:37.908986 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:40:37.908994 kernel: clk: Disabling unused clocks Feb 13 19:40:37.909001 kernel: Freeing unused kernel memory: 39360K Feb 13 19:40:37.909008 kernel: Run /init as init process Feb 13 19:40:37.909016 kernel: with arguments: Feb 13 19:40:37.909023 kernel: /init Feb 13 19:40:37.909032 kernel: with environment: Feb 13 19:40:37.909039 kernel: HOME=/ Feb 13 19:40:37.909047 kernel: TERM=linux Feb 13 19:40:37.909054 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:40:37.909064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:40:37.909073 systemd[1]: Detected virtualization kvm. Feb 13 19:40:37.909082 systemd[1]: Detected architecture arm64. Feb 13 19:40:37.909091 systemd[1]: Running in initrd. Feb 13 19:40:37.909099 systemd[1]: No hostname configured, using default hostname. Feb 13 19:40:37.909107 systemd[1]: Hostname set to . Feb 13 19:40:37.909115 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:40:37.909123 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:40:37.909131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:40:37.909139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:40:37.909148 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Feb 13 19:40:37.909158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:40:37.909166 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:40:37.909175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:40:37.909184 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:40:37.909192 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:40:37.909200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:40:37.909209 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:40:37.909219 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:40:37.909227 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:40:37.909235 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:40:37.909243 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:40:37.909251 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:40:37.909259 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:40:37.909267 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:40:37.909275 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:40:37.909283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:40:37.909293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:40:37.909301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:40:37.909309 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 19:40:37.909327 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:40:37.909336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:40:37.909344 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:40:37.909352 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:40:37.909360 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:40:37.909370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:40:37.909378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:40:37.909386 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:40:37.909394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:40:37.909402 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:40:37.909411 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:40:37.909421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:40:37.909429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:40:37.909459 systemd-journald[238]: Collecting audit messages is disabled. Feb 13 19:40:37.909481 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:40:37.909490 systemd-journald[238]: Journal started Feb 13 19:40:37.909508 systemd-journald[238]: Runtime Journal (/run/log/journal/ba107343affe4f39bc67d29db90311b1) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:40:37.897682 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 19:40:37.912342 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 19:40:37.912375 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:40:37.914110 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 19:40:37.915336 kernel: Bridge firewalling registered Feb 13 19:40:37.915366 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:40:37.917148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:40:37.922396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:40:37.924171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:40:37.926350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:40:37.927482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:40:37.931713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:40:37.936735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:40:37.940330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:40:37.946885 dracut-cmdline[274]: dracut-dracut-053 Feb 13 19:40:37.949711 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:40:37.948466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:40:37.974411 systemd-resolved[283]: Positive Trust Anchors: Feb 13 19:40:37.974429 systemd-resolved[283]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:40:37.974460 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:40:37.979246 systemd-resolved[283]: Defaulting to hostname 'linux'. Feb 13 19:40:37.980336 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:40:37.981775 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:40:38.021351 kernel: SCSI subsystem initialized Feb 13 19:40:38.023329 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:40:38.031353 kernel: iscsi: registered transport (tcp) Feb 13 19:40:38.045344 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:40:38.045387 kernel: QLogic iSCSI HBA Driver Feb 13 19:40:38.087191 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:40:38.103483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:40:38.118537 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:40:38.119540 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:40:38.119552 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:40:38.166361 kernel: raid6: neonx8 gen() 15788 MB/s Feb 13 19:40:38.183335 kernel: raid6: neonx4 gen() 15650 MB/s Feb 13 19:40:38.200343 kernel: raid6: neonx2 gen() 13239 MB/s Feb 13 19:40:38.217333 kernel: raid6: neonx1 gen() 10483 MB/s Feb 13 19:40:38.234356 kernel: raid6: int64x8 gen() 6949 MB/s Feb 13 19:40:38.251335 kernel: raid6: int64x4 gen() 7337 MB/s Feb 13 19:40:38.268330 kernel: raid6: int64x2 gen() 6130 MB/s Feb 13 19:40:38.285340 kernel: raid6: int64x1 gen() 5047 MB/s Feb 13 19:40:38.285361 kernel: raid6: using algorithm neonx8 gen() 15788 MB/s Feb 13 19:40:38.302348 kernel: raid6: .... xor() 11912 MB/s, rmw enabled Feb 13 19:40:38.302376 kernel: raid6: using neon recovery algorithm Feb 13 19:40:38.307343 kernel: xor: measuring software checksum speed Feb 13 19:40:38.307364 kernel: 8regs : 19141 MB/sec Feb 13 19:40:38.308732 kernel: 32regs : 18154 MB/sec Feb 13 19:40:38.308747 kernel: arm64_neon : 26399 MB/sec Feb 13 19:40:38.308756 kernel: xor: using function: arm64_neon (26399 MB/sec) Feb 13 19:40:38.357352 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:40:38.368489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:40:38.386466 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:40:38.397639 systemd-udevd[462]: Using default interface naming scheme 'v255'. Feb 13 19:40:38.400812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:40:38.403140 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:40:38.417808 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Feb 13 19:40:38.444183 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:40:38.452480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:40:38.493813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:40:38.506462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:40:38.517024 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:40:38.518186 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:40:38.521164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:40:38.523391 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:40:38.533019 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:40:38.537892 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:40:38.549732 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:40:38.549868 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:40:38.549889 kernel: GPT:9289727 != 19775487
Feb 13 19:40:38.549899 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:40:38.549909 kernel: GPT:9289727 != 19775487
Feb 13 19:40:38.549920 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:40:38.549930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:38.544794 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:40:38.548880 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:40:38.548940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:38.550941 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:40:38.552026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:40:38.552082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:38.553487 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:38.563466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:38.570335 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Feb 13 19:40:38.575354 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (515)
Feb 13 19:40:38.576391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:40:38.579354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:38.590404 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:40:38.595053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:40:38.598943 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:40:38.599885 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:40:38.617466 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:40:38.619533 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:40:38.624399 disk-uuid[553]: Primary Header is updated.
Feb 13 19:40:38.624399 disk-uuid[553]: Secondary Entries is updated.
Feb 13 19:40:38.624399 disk-uuid[553]: Secondary Header is updated.
Feb 13 19:40:38.632349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:38.637355 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:38.646129 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:39.642339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:39.644459 disk-uuid[554]: The operation has completed successfully.
Feb 13 19:40:39.667881 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:40:39.668001 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:40:39.687488 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:40:39.690333 sh[576]: Success
Feb 13 19:40:39.705363 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:40:39.735360 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:40:39.750892 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:40:39.752428 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:40:39.764390 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:40:39.764439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:40:39.764450 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:40:39.765759 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:40:39.765774 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:40:39.770034 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:40:39.771238 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:40:39.777465 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:40:39.778859 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:40:39.785931 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:40:39.785982 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:40:39.785993 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:40:39.788334 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:40:39.796030 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:40:39.797657 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:40:39.803135 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:40:39.810557 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:40:39.876373 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:40:39.886489 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:40:39.909458 systemd-networkd[767]: lo: Link UP
Feb 13 19:40:39.910139 systemd-networkd[767]: lo: Gained carrier
Feb 13 19:40:39.910886 systemd-networkd[767]: Enumeration completed
Feb 13 19:40:39.912441 ignition[665]: Ignition 2.19.0
Feb 13 19:40:39.911193 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:40:39.912448 ignition[665]: Stage: fetch-offline
Feb 13 19:40:39.912130 systemd[1]: Reached target network.target - Network.
Feb 13 19:40:39.912482 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:39.912671 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:39.912489 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:39.912674 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:40:39.912666 ignition[665]: parsed url from cmdline: ""
Feb 13 19:40:39.913514 systemd-networkd[767]: eth0: Link UP
Feb 13 19:40:39.912670 ignition[665]: no config URL provided
Feb 13 19:40:39.913518 systemd-networkd[767]: eth0: Gained carrier
Feb 13 19:40:39.912674 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:40:39.913525 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:39.912681 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:40:39.912704 ignition[665]: op(1): [started] loading QEMU firmware config module
Feb 13 19:40:39.912709 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:40:39.922470 ignition[665]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:40:39.932179 ignition[665]: parsing config with SHA512: cdfd1892eead5f07f0116278214cbdc4d35c37389501b7ed7a544b1832b79e5f70bd6cea23f8cca8eff75b82ed4d02d90265040999872444c656765e9ee970fb
Feb 13 19:40:39.935153 unknown[665]: fetched base config from "system"
Feb 13 19:40:39.935163 unknown[665]: fetched user config from "qemu"
Feb 13 19:40:39.935449 ignition[665]: fetch-offline: fetch-offline passed
Feb 13 19:40:39.935368 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:40:39.935515 ignition[665]: Ignition finished successfully
Feb 13 19:40:39.937431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:40:39.938980 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:40:39.946475 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:40:39.956895 ignition[774]: Ignition 2.19.0
Feb 13 19:40:39.956906 ignition[774]: Stage: kargs
Feb 13 19:40:39.957085 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:39.957094 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:39.957823 ignition[774]: kargs: kargs passed
Feb 13 19:40:39.957869 ignition[774]: Ignition finished successfully
Feb 13 19:40:39.960201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:40:39.970494 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:40:39.980260 ignition[782]: Ignition 2.19.0
Feb 13 19:40:39.980270 ignition[782]: Stage: disks
Feb 13 19:40:39.980465 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:39.982802 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:40:39.980476 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:39.983925 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:40:39.981194 ignition[782]: disks: disks passed
Feb 13 19:40:39.985163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:40:39.981242 ignition[782]: Ignition finished successfully
Feb 13 19:40:39.986065 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:40:39.986801 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:40:39.988132 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:40:39.990414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:40:40.004235 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:40:40.133082 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:40:40.140462 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:40:40.203148 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:40:40.204336 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:40:40.204246 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:40:40.224400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:40:40.226003 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:40:40.227033 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:40:40.227075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:40:40.227098 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:40:40.232267 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:40:40.236820 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Feb 13 19:40:40.236842 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:40:40.236853 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:40:40.236862 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:40:40.236872 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:40:40.234834 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:40:40.239755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:40:40.276440 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:40:40.280607 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:40:40.283841 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:40:40.287798 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:40:40.354582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:40:40.361437 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:40:40.362760 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:40:40.367449 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:40:40.382695 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:40:40.384358 ignition[913]: INFO : Ignition 2.19.0
Feb 13 19:40:40.384358 ignition[913]: INFO : Stage: mount
Feb 13 19:40:40.384358 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:40.384358 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:40.387666 ignition[913]: INFO : mount: mount passed
Feb 13 19:40:40.387666 ignition[913]: INFO : Ignition finished successfully
Feb 13 19:40:40.385749 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:40:40.396403 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:40:40.763582 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:40:40.780552 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:40:40.788353 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Feb 13 19:40:40.790027 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:40:40.790045 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:40:40.790573 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:40:40.794727 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:40:40.794034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:40:40.820892 ignition[944]: INFO : Ignition 2.19.0
Feb 13 19:40:40.820892 ignition[944]: INFO : Stage: files
Feb 13 19:40:40.822229 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:40.822229 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:40.822229 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:40:40.825086 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:40:40.825086 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:40:40.825086 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:40:40.825086 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:40:40.825086 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:40:40.824806 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:40:40.831033 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:40:41.195145 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:40:41.311745 systemd-networkd[767]: eth0: Gained IPv6LL
Feb 13 19:40:41.419425 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:40:41.419425 ignition[944]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:40:41.422343 ignition[944]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:40:41.422343 ignition[944]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:40:41.422343 ignition[944]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:40:41.422343 ignition[944]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:40:41.442659 ignition[944]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:40:41.446757 ignition[944]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:40:41.448896 ignition[944]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:40:41.448896 ignition[944]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:40:41.448896 ignition[944]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:40:41.448896 ignition[944]: INFO : files: files passed
Feb 13 19:40:41.448896 ignition[944]: INFO : Ignition finished successfully
Feb 13 19:40:41.449945 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:40:41.458902 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:40:41.462391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:40:41.464656 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:40:41.465484 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:40:41.469342 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:40:41.471253 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:40:41.471253 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:40:41.473774 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:40:41.477583 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:40:41.478672 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:40:41.493517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:40:41.515525 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:40:41.517369 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:40:41.518731 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:40:41.520075 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:40:41.521402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:40:41.524801 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:40:41.537612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:40:41.548483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:40:41.555917 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:40:41.556870 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:40:41.558332 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:40:41.559642 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:40:41.559759 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:40:41.561599 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:40:41.563055 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:40:41.564269 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:40:41.565535 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:40:41.566960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:40:41.568387 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:40:41.569767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:40:41.571250 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:40:41.572119 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:40:41.572956 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:40:41.573627 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:40:41.573740 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:40:41.575360 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:40:41.576802 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:40:41.578193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:40:41.581402 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:40:41.582712 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:40:41.582832 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:40:41.584931 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:40:41.585047 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:40:41.586454 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:40:41.587611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:40:41.593411 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:40:41.594381 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:40:41.595972 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:40:41.597104 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:40:41.597193 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:40:41.598310 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:40:41.598409 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:40:41.599529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:40:41.599636 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:40:41.600930 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:40:41.601031 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:40:41.613483 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:40:41.617954 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:40:41.618094 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:40:41.621435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:40:41.622090 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:40:41.622207 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:40:41.623549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:40:41.623645 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:40:41.631511 ignition[998]: INFO : Ignition 2.19.0
Feb 13 19:40:41.631511 ignition[998]: INFO : Stage: umount
Feb 13 19:40:41.637205 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:41.637205 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:41.637205 ignition[998]: INFO : umount: umount passed
Feb 13 19:40:41.637205 ignition[998]: INFO : Ignition finished successfully
Feb 13 19:40:41.631549 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:40:41.631633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:40:41.637036 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:40:41.638229 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:40:41.641154 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:40:41.641559 systemd[1]: Stopped target network.target - Network.
Feb 13 19:40:41.642242 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:40:41.642297 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:40:41.644910 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:40:41.644969 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:40:41.646448 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:40:41.646494 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:40:41.647706 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:40:41.647744 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:40:41.649129 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:40:41.651173 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:40:41.657355 systemd-networkd[767]: eth0: DHCPv6 lease lost
Feb 13 19:40:41.658927 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:40:41.659049 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:40:41.660265 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:40:41.660298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:40:41.673417 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:40:41.675039 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:40:41.675099 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:40:41.676123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:40:41.678648 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:40:41.678866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:40:41.682304 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:40:41.682675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:40:41.683998 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:40:41.684048 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:40:41.686109 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:40:41.686159 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:40:41.688104 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:40:41.689580 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:40:41.691178 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:40:41.691277 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:40:41.692943 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:40:41.693039 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:40:41.694799 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:40:41.694934 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:40:41.696852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:40:41.696918 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:40:41.699509 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:40:41.699551 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:40:41.700956 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:40:41.701003 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:40:41.703154 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:40:41.703250 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:40:41.705341 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:40:41.705393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:41.719475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:40:41.720251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:40:41.720304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:40:41.721930 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:40:41.721974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:41.726709 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:40:41.727564 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:40:41.729401 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:40:41.733025 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:40:41.742270 systemd[1]: Switching root.
Feb 13 19:40:41.768306 systemd-journald[238]: Journal stopped
Feb 13 19:40:42.390520 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:40:42.390586 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:40:42.390600 kernel: SELinux: policy capability open_perms=1
Feb 13 19:40:42.390610 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:40:42.390620 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:40:42.390634 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:40:42.390648 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:40:42.390658 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:40:42.390668 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:40:42.390677 kernel: audit: type=1403 audit(1739475641.871:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:40:42.390688 systemd[1]: Successfully loaded SELinux policy in 32.237ms.
Feb 13 19:40:42.390702 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.381ms.
Feb 13 19:40:42.390713 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:40:42.390725 systemd[1]: Detected virtualization kvm.
Feb 13 19:40:42.390739 systemd[1]: Detected architecture arm64.
Feb 13 19:40:42.390752 systemd[1]: Detected first boot.
Feb 13 19:40:42.390762 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:40:42.390782 zram_generator::config[1044]: No configuration found. Feb 13 19:40:42.390796 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:40:42.390807 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:40:42.390817 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:40:42.390831 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:40:42.390842 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:40:42.390854 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:40:42.390865 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:40:42.390875 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:40:42.390886 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:40:42.390897 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:40:42.390909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:40:42.390920 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:40:42.390931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:40:42.390941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:40:42.390952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:40:42.390963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:40:42.390974 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 19:40:42.390985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:40:42.390995 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 19:40:42.391008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:40:42.391019 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:40:42.391029 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:40:42.391041 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:40:42.391051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:40:42.391062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:40:42.391073 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:40:42.391084 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:40:42.391096 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:40:42.391107 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:40:42.391117 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:40:42.391128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:40:42.391138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:40:42.391149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:40:42.391160 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:40:42.391170 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:40:42.391181 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:40:42.391193 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:40:42.391203 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:40:42.391214 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:40:42.391225 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:40:42.391236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:40:42.391246 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:40:42.391257 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:40:42.391268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:42.391280 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:40:42.391291 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:40:42.391302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:42.391370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:40:42.391385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:42.391396 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:40:42.391407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:42.391417 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:40:42.391428 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:40:42.391442 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:40:42.391453 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:40:42.391464 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:40:42.391474 kernel: fuse: init (API version 7.39)
Feb 13 19:40:42.391484 kernel: loop: module loaded
Feb 13 19:40:42.391494 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:40:42.391505 kernel: ACPI: bus type drm_connector registered
Feb 13 19:40:42.391515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:40:42.391526 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:40:42.391538 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:40:42.391549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:40:42.391559 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:40:42.391570 systemd[1]: Stopped verity-setup.service.
Feb 13 19:40:42.391580 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:40:42.391590 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:40:42.391601 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:40:42.391614 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:40:42.391646 systemd-journald[1108]: Collecting audit messages is disabled.
Feb 13 19:40:42.391668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:40:42.391679 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:40:42.391690 systemd-journald[1108]: Journal started
Feb 13 19:40:42.391716 systemd-journald[1108]: Runtime Journal (/run/log/journal/ba107343affe4f39bc67d29db90311b1) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:40:42.210061 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:40:42.229264 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:40:42.229602 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:40:42.394562 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:40:42.396369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:40:42.397553 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:40:42.398672 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:40:42.398828 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:40:42.400119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:42.400273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:42.401421 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:40:42.401549 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:40:42.402592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:42.402734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:42.403926 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:40:42.404057 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:40:42.405762 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:42.405915 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:42.407014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:40:42.408158 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:40:42.409597 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:40:42.421443 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:40:42.429428 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:40:42.431292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:40:42.432145 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:40:42.432184 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:40:42.433871 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:40:42.435817 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:40:42.437677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:40:42.438554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:42.441561 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:40:42.443405 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:40:42.444338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:40:42.448538 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:40:42.449516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:40:42.451238 systemd-journald[1108]: Time spent on flushing to /var/log/journal/ba107343affe4f39bc67d29db90311b1 is 17.063ms for 836 entries.
Feb 13 19:40:42.451238 systemd-journald[1108]: System Journal (/var/log/journal/ba107343affe4f39bc67d29db90311b1) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:40:42.479799 systemd-journald[1108]: Received client request to flush runtime journal.
Feb 13 19:40:42.479861 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 19:40:42.479876 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:40:42.452617 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:40:42.456563 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:40:42.459492 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:40:42.462925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:40:42.464629 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:40:42.465737 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:40:42.466833 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:40:42.471786 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:40:42.476149 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:40:42.485599 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:40:42.490924 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:40:42.494386 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:40:42.503249 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:40:42.507377 kernel: loop1: detected capacity change from 0 to 114328
Feb 13 19:40:42.509764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:40:42.510588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:40:42.511910 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:40:42.512996 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:40:42.523579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:40:42.542474 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 19:40:42.542494 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 19:40:42.546998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:40:42.553300 kernel: loop2: detected capacity change from 0 to 114432
Feb 13 19:40:42.593354 kernel: loop3: detected capacity change from 0 to 194096
Feb 13 19:40:42.599337 kernel: loop4: detected capacity change from 0 to 114328
Feb 13 19:40:42.603338 kernel: loop5: detected capacity change from 0 to 114432
Feb 13 19:40:42.606136 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:40:42.606556 (sd-merge)[1180]: Merged extensions into '/usr'.
Feb 13 19:40:42.610603 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:40:42.610618 systemd[1]: Reloading...
Feb 13 19:40:42.658364 zram_generator::config[1202]: No configuration found.
Feb 13 19:40:42.720383 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:40:42.768427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:40:42.804484 systemd[1]: Reloading finished in 193 ms.
Feb 13 19:40:42.836626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:40:42.837792 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:40:42.842477 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:40:42.844276 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:40:42.854919 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:40:42.854937 systemd[1]: Reloading...
Feb 13 19:40:42.874127 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:40:42.874422 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:40:42.875055 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:40:42.875268 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Feb 13 19:40:42.875358 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Feb 13 19:40:42.877461 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:40:42.877474 systemd-tmpfiles[1241]: Skipping /boot
Feb 13 19:40:42.884299 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:40:42.884333 systemd-tmpfiles[1241]: Skipping /boot
Feb 13 19:40:42.907340 zram_generator::config[1268]: No configuration found.
Feb 13 19:40:42.991802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:40:43.027190 systemd[1]: Reloading finished in 171 ms.
Feb 13 19:40:43.041252 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:40:43.055356 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:40:43.062395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:40:43.064729 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:40:43.066987 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:40:43.071562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:40:43.080400 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:40:43.085632 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:40:43.089745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:43.093579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:43.095577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:43.102198 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:43.103106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:43.104924 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:40:43.106486 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:40:43.109848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:43.110000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:43.111424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:43.111556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:43.113547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:43.113674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:43.118123 systemd-udevd[1310]: Using default interface naming scheme 'v255'.
Feb 13 19:40:43.122578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:43.132930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:43.137573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:43.142632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:43.143565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:43.149092 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:40:43.150587 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:40:43.151815 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:40:43.153271 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:40:43.157357 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:40:43.158889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:43.159018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:43.160518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:43.160764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:43.162235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:43.162559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:43.168285 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:40:43.178591 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:40:43.185423 augenrules[1362]: No rules
Feb 13 19:40:43.188269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:43.205529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:43.207577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:40:43.209309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:43.211116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:43.212435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:43.215741 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:40:43.221557 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:40:43.222500 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:40:43.224931 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:40:43.226484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:43.226611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:43.228061 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:40:43.228263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:40:43.229671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:43.229824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:43.231525 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:43.231648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:43.233345 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1343)
Feb 13 19:40:43.236276 systemd-resolved[1308]: Positive Trust Anchors:
Feb 13 19:40:43.238693 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:40:43.238737 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:40:43.240249 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 19:40:43.244049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:40:43.244649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:40:43.246941 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Feb 13 19:40:43.256149 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:40:43.259299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:40:43.266374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:40:43.276735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:40:43.295983 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:40:43.303302 systemd-networkd[1380]: lo: Link UP
Feb 13 19:40:43.303309 systemd-networkd[1380]: lo: Gained carrier
Feb 13 19:40:43.304627 systemd-networkd[1380]: Enumeration completed
Feb 13 19:40:43.304803 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:40:43.306267 systemd[1]: Reached target network.target - Network.
Feb 13 19:40:43.310538 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:43.310550 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:40:43.311630 systemd-networkd[1380]: eth0: Link UP
Feb 13 19:40:43.311640 systemd-networkd[1380]: eth0: Gained carrier
Feb 13 19:40:43.311656 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:43.314549 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:40:43.315501 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:40:43.316409 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:40:43.331409 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:40:43.332042 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Feb 13 19:40:43.332700 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:40:43.332750 systemd-timesyncd[1381]: Initial clock synchronization to Thu 2025-02-13 19:40:43.648338 UTC.
Feb 13 19:40:43.347597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:43.358579 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:40:43.361015 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:40:43.385920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:43.387909 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:40:43.415405 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:40:43.416543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:40:43.417362 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:40:43.418181 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:40:43.419161 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:40:43.420259 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:40:43.421221 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:40:43.422176 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:40:43.423086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:40:43.423118 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:40:43.423786 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:40:43.425172 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:40:43.427261 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:40:43.436257 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:40:43.438258 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:40:43.439556 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:40:43.440443 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:40:43.441141 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:40:43.441890 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:40:43.441920 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:40:43.442811 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:40:43.444542 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:40:43.447443 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:40:43.447618 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:40:43.450539 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:40:43.451270 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:40:43.453822 jq[1412]: false
Feb 13 19:40:43.454512 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:40:43.457487 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:40:43.460926 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:40:43.467916 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:40:43.469618 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:40:43.470033 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:40:43.470876 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:40:43.473546 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:40:43.475817 extend-filesystems[1413]: Found loop3 Feb 13 19:40:43.475817 extend-filesystems[1413]: Found loop4 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found loop5 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda1 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda2 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda3 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found usr Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda4 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda6 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda7 Feb 13 19:40:43.478111 extend-filesystems[1413]: Found vda9 Feb 13 19:40:43.478111 extend-filesystems[1413]: Checking size of /dev/vda9 Feb 13 19:40:43.477493 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:40:43.480387 dbus-daemon[1411]: [system] SELinux support is enabled Feb 13 19:40:43.482510 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:40:43.486341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:40:43.509115 jq[1426]: true Feb 13 19:40:43.486523 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:40:43.486818 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:40:43.486950 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:40:43.488300 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 19:40:43.488534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:40:43.505560 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:40:43.505608 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:40:43.507141 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:40:43.507397 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:40:43.507417 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:40:43.522396 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1359)
Feb 13 19:40:43.522458 jq[1430]: true
Feb 13 19:40:43.529453 update_engine[1425]: I20250213 19:40:43.527487  1425 main.cc:92] Flatcar Update Engine starting
Feb 13 19:40:43.530908 extend-filesystems[1413]: Resized partition /dev/vda9
Feb 13 19:40:43.532540 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:40:43.535111 update_engine[1425]: I20250213 19:40:43.531159  1425 update_check_scheduler.cc:74] Next update check in 6m28s
Feb 13 19:40:43.537403 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:40:43.540388 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 19:40:43.544616 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:40:43.546435 systemd-logind[1421]: New seat seat0.
Feb 13 19:40:43.553484 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:40:43.554473 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:40:43.570340 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:40:43.585221 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:40:43.585221 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:40:43.585221 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:40:43.590403 extend-filesystems[1413]: Resized filesystem in /dev/vda9
Feb 13 19:40:43.586809 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:40:43.587024 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:40:43.595632 bash[1461]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:40:43.597377 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:40:43.599081 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:40:43.603375 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:40:43.702804 containerd[1433]: time="2025-02-13T19:40:43.702706240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 19:40:43.729928 containerd[1433]: time="2025-02-13T19:40:43.729882880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731250 containerd[1433]: time="2025-02-13T19:40:43.731216120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731279 containerd[1433]: time="2025-02-13T19:40:43.731255760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:40:43.731279 containerd[1433]: time="2025-02-13T19:40:43.731272120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:40:43.731472 containerd[1433]: time="2025-02-13T19:40:43.731447720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:40:43.731499 containerd[1433]: time="2025-02-13T19:40:43.731473560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731548 containerd[1433]: time="2025-02-13T19:40:43.731530080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731548 containerd[1433]: time="2025-02-13T19:40:43.731545320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731729 containerd[1433]: time="2025-02-13T19:40:43.731708000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731729 containerd[1433]: time="2025-02-13T19:40:43.731726680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731788 containerd[1433]: time="2025-02-13T19:40:43.731739760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:40:43.731788 containerd[1433]: time="2025-02-13T19:40:43.731749280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.732322 containerd[1433]: time="2025-02-13T19:40:43.731836200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.732322 containerd[1433]: time="2025-02-13T19:40:43.732042200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:40:43.732322 containerd[1433]: time="2025-02-13T19:40:43.732147040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:40:43.732322 containerd[1433]: time="2025-02-13T19:40:43.732162640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:40:43.732322 containerd[1433]: time="2025-02-13T19:40:43.732240520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:40:43.732322 containerd[1433]: time="2025-02-13T19:40:43.732277520Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:40:43.736389 containerd[1433]: time="2025-02-13T19:40:43.736353720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:40:43.736433 containerd[1433]: time="2025-02-13T19:40:43.736410080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:40:43.736433 containerd[1433]: time="2025-02-13T19:40:43.736427320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:40:43.736467 containerd[1433]: time="2025-02-13T19:40:43.736444280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:40:43.736467 containerd[1433]: time="2025-02-13T19:40:43.736458320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:40:43.736629 containerd[1433]: time="2025-02-13T19:40:43.736607520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:40:43.736872 containerd[1433]: time="2025-02-13T19:40:43.736853360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:40:43.736976 containerd[1433]: time="2025-02-13T19:40:43.736958280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:40:43.737006 containerd[1433]: time="2025-02-13T19:40:43.736979240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:40:43.737006 containerd[1433]: time="2025-02-13T19:40:43.736993280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:40:43.737044 containerd[1433]: time="2025-02-13T19:40:43.737006680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737044 containerd[1433]: time="2025-02-13T19:40:43.737020440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737044 containerd[1433]: time="2025-02-13T19:40:43.737032760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737092 containerd[1433]: time="2025-02-13T19:40:43.737050800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737092 containerd[1433]: time="2025-02-13T19:40:43.737065680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737092 containerd[1433]: time="2025-02-13T19:40:43.737077880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737092 containerd[1433]: time="2025-02-13T19:40:43.737090160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737157 containerd[1433]: time="2025-02-13T19:40:43.737102960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:40:43.737157 containerd[1433]: time="2025-02-13T19:40:43.737122960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737157 containerd[1433]: time="2025-02-13T19:40:43.737137320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737157 containerd[1433]: time="2025-02-13T19:40:43.737149440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737226 containerd[1433]: time="2025-02-13T19:40:43.737161280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737226 containerd[1433]: time="2025-02-13T19:40:43.737178400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737226 containerd[1433]: time="2025-02-13T19:40:43.737190920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737226 containerd[1433]: time="2025-02-13T19:40:43.737202360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737226 containerd[1433]: time="2025-02-13T19:40:43.737216320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737310 containerd[1433]: time="2025-02-13T19:40:43.737233240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737310 containerd[1433]: time="2025-02-13T19:40:43.737249120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737310 containerd[1433]: time="2025-02-13T19:40:43.737260880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737310 containerd[1433]: time="2025-02-13T19:40:43.737272680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737310 containerd[1433]: time="2025-02-13T19:40:43.737283840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737310 containerd[1433]: time="2025-02-13T19:40:43.737302640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:40:43.737432 containerd[1433]: time="2025-02-13T19:40:43.737340080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737432 containerd[1433]: time="2025-02-13T19:40:43.737354760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.737432 containerd[1433]: time="2025-02-13T19:40:43.737371320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737482120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737501400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737513200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737524680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737533720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737545800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737555960Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:40:43.738339 containerd[1433]: time="2025-02-13T19:40:43.737568240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:40:43.738628 containerd[1433]: time="2025-02-13T19:40:43.737903400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:40:43.738628 containerd[1433]: time="2025-02-13T19:40:43.737958840Z" level=info msg="Connect containerd service"
Feb 13 19:40:43.738628 containerd[1433]: time="2025-02-13T19:40:43.737982800Z" level=info msg="using legacy CRI server"
Feb 13 19:40:43.738628 containerd[1433]: time="2025-02-13T19:40:43.737988880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:40:43.738628 containerd[1433]: time="2025-02-13T19:40:43.738066720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:40:43.738885 containerd[1433]: time="2025-02-13T19:40:43.738676880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:40:43.739176 containerd[1433]: time="2025-02-13T19:40:43.739040120Z" level=info msg="Start subscribing containerd event"
Feb 13 19:40:43.739176 containerd[1433]: time="2025-02-13T19:40:43.739096680Z" level=info msg="Start recovering state"
Feb 13 19:40:43.739176 containerd[1433]: time="2025-02-13T19:40:43.739155560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:40:43.739270 containerd[1433]: time="2025-02-13T19:40:43.739158400Z" level=info msg="Start event monitor"
Feb 13 19:40:43.739270 containerd[1433]: time="2025-02-13T19:40:43.739195760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:40:43.739270 containerd[1433]: time="2025-02-13T19:40:43.739225040Z" level=info msg="Start snapshots syncer"
Feb 13 19:40:43.739270 containerd[1433]: time="2025-02-13T19:40:43.739250640Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:40:43.739270 containerd[1433]: time="2025-02-13T19:40:43.739261840Z" level=info msg="Start streaming server"
Feb 13 19:40:43.740281 containerd[1433]: time="2025-02-13T19:40:43.739424520Z" level=info msg="containerd successfully booted in 0.038882s"
Feb 13 19:40:43.739508 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:40:44.576076 systemd-networkd[1380]: eth0: Gained IPv6LL
Feb 13 19:40:44.580473 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:40:44.582107 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:40:44.591103 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 19:40:44.593644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:40:44.596189 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:40:44.620005 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:40:44.620250 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 19:40:44.622062 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:40:44.622496 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:40:45.049310 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:40:45.069220 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:40:45.077694 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:40:45.084232 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:40:45.084455 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:40:45.088278 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:40:45.092228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:40:45.096234 (kubelet)[1514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:40:45.102114 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:40:45.105888 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:40:45.107947 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 19:40:45.109089 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:40:45.109935 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:40:45.111452 systemd[1]: Startup finished in 552ms (kernel) + 4.163s (initrd) + 3.274s (userspace) = 7.991s.
Feb 13 19:40:45.558819 kubelet[1514]: E0213 19:40:45.558759    1514 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:40:45.561484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:40:45.561632 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:40:50.319003 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:40:50.320248 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:33570.service - OpenSSH per-connection server daemon (10.0.0.1:33570).
Feb 13 19:40:50.376065 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 33570 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:50.377812 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:50.386022 systemd-logind[1421]: New session 1 of user core.
Feb 13 19:40:50.387098 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:40:50.396631 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:40:50.406445 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:40:50.408773 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:40:50.415669 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:40:50.503964 systemd[1535]: Queued start job for default target default.target.
Feb 13 19:40:50.516286 systemd[1535]: Created slice app.slice - User Application Slice.
Feb 13 19:40:50.516348 systemd[1535]: Reached target paths.target - Paths.
Feb 13 19:40:50.516364 systemd[1535]: Reached target timers.target - Timers.
Feb 13 19:40:50.517632 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:40:50.527646 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:40:50.527708 systemd[1535]: Reached target sockets.target - Sockets.
Feb 13 19:40:50.527721 systemd[1535]: Reached target basic.target - Basic System.
Feb 13 19:40:50.527758 systemd[1535]: Reached target default.target - Main User Target.
Feb 13 19:40:50.527785 systemd[1535]: Startup finished in 103ms.
Feb 13 19:40:50.528033 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:40:50.529354 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:40:50.587942 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:33580.service - OpenSSH per-connection server daemon (10.0.0.1:33580).
Feb 13 19:40:50.646940 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 33580 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:50.647565 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:50.651420 systemd-logind[1421]: New session 2 of user core.
Feb 13 19:40:50.662480 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:40:50.715382 sshd[1546]: pam_unix(sshd:session): session closed for user core
Feb 13 19:40:50.728684 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:33580.service: Deactivated successfully.
Feb 13 19:40:50.731625 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:40:50.733461 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:40:50.739567 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:33594.service - OpenSSH per-connection server daemon (10.0.0.1:33594).
Feb 13 19:40:50.742731 systemd-logind[1421]: Removed session 2.
Feb 13 19:40:50.771001 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 33594 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:50.772528 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:50.776156 systemd-logind[1421]: New session 3 of user core.
Feb 13 19:40:50.784474 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:40:50.833428 sshd[1553]: pam_unix(sshd:session): session closed for user core
Feb 13 19:40:50.847629 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:33594.service: Deactivated successfully.
Feb 13 19:40:50.849564 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:40:50.850793 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:40:50.851898 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:33598.service - OpenSSH per-connection server daemon (10.0.0.1:33598).
Feb 13 19:40:50.852656 systemd-logind[1421]: Removed session 3.
Feb 13 19:40:50.886471 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 33598 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:50.887667 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:50.891381 systemd-logind[1421]: New session 4 of user core.
Feb 13 19:40:50.901465 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:40:50.953187 sshd[1560]: pam_unix(sshd:session): session closed for user core
Feb 13 19:40:50.962356 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:33598.service: Deactivated successfully.
Feb 13 19:40:50.963507 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:40:50.965498 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:40:50.975572 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:33602.service - OpenSSH per-connection server daemon (10.0.0.1:33602).
Feb 13 19:40:50.976468 systemd-logind[1421]: Removed session 4.
Feb 13 19:40:51.006180 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 33602 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:51.007480 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:51.011398 systemd-logind[1421]: New session 5 of user core.
Feb 13 19:40:51.017520 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:40:51.075490 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:40:51.075787 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:40:51.091209 sudo[1570]: pam_unix(sudo:session): session closed for user root
Feb 13 19:40:51.092977 sshd[1567]: pam_unix(sshd:session): session closed for user core
Feb 13 19:40:51.110769 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:33602.service: Deactivated successfully.
Feb 13 19:40:51.112751 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:40:51.115416 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:40:51.126580 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:33612.service - OpenSSH per-connection server daemon (10.0.0.1:33612).
Feb 13 19:40:51.127762 systemd-logind[1421]: Removed session 5.
Feb 13 19:40:51.158458 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 33612 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:51.159642 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:51.162985 systemd-logind[1421]: New session 6 of user core.
Feb 13 19:40:51.178542 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:40:51.229684 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:40:51.229957 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:40:51.232911 sudo[1579]: pam_unix(sudo:session): session closed for user root
Feb 13 19:40:51.237300 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 19:40:51.237582 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:40:51.255636 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 19:40:51.256788 auditctl[1582]: No rules
Feb 13 19:40:51.257061 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:40:51.257216 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 19:40:51.259221 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:40:51.281709 augenrules[1600]: No rules
Feb 13 19:40:51.282805 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:40:51.283799 sudo[1578]: pam_unix(sudo:session): session closed for user root
Feb 13 19:40:51.285243 sshd[1575]: pam_unix(sshd:session): session closed for user core
Feb 13 19:40:51.295576 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:33612.service: Deactivated successfully.
Feb 13 19:40:51.297053 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:40:51.300146 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:40:51.311709 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:33614.service - OpenSSH per-connection server daemon (10.0.0.1:33614).
Feb 13 19:40:51.312611 systemd-logind[1421]: Removed session 6.
Feb 13 19:40:51.343160 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 33614 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:40:51.343633 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:40:51.347388 systemd-logind[1421]: New session 7 of user core.
Feb 13 19:40:51.361537 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:40:51.411967 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:40:51.412240 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:40:51.430627 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 19:40:51.444999 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:40:51.445182 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:40:51.952009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:51.964559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:51.981623 systemd[1]: Reloading requested from client PID 1661 ('systemctl') (unit session-7.scope)... Feb 13 19:40:51.981640 systemd[1]: Reloading... Feb 13 19:40:52.044361 zram_generator::config[1700]: No configuration found. Feb 13 19:40:52.231399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:40:52.282769 systemd[1]: Reloading finished in 300 ms. Feb 13 19:40:52.321535 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:40:52.321598 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:40:52.322405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:52.324510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:52.416490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:52.420878 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:40:52.458175 kubelet[1745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:40:52.458175 kubelet[1745]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 19:40:52.458175 kubelet[1745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:40:52.459106 kubelet[1745]: I0213 19:40:52.459053 1745 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:40:53.033627 kubelet[1745]: I0213 19:40:53.033578 1745 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:40:53.033627 kubelet[1745]: I0213 19:40:53.033609 1745 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:40:53.033831 kubelet[1745]: I0213 19:40:53.033812 1745 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:40:53.056861 kubelet[1745]: I0213 19:40:53.056811 1745 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:40:53.072412 kubelet[1745]: I0213 19:40:53.072369 1745 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:40:53.073492 kubelet[1745]: I0213 19:40:53.073442 1745 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:40:53.073708 kubelet[1745]: I0213 19:40:53.073491 1745 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.72","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:40:53.073792 kubelet[1745]: I0213 19:40:53.073775 1745 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:40:53.073792 kubelet[1745]: I0213 19:40:53.073786 1745 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:40:53.074061 kubelet[1745]: I0213 19:40:53.074037 1745 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:40:53.074891 kubelet[1745]: I0213 19:40:53.074840 1745 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:40:53.074891 kubelet[1745]: I0213 19:40:53.074862 1745 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:40:53.075055 kubelet[1745]: I0213 19:40:53.075032 1745 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:40:53.075163 kubelet[1745]: I0213 19:40:53.075151 1745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:40:53.075257 kubelet[1745]: E0213 19:40:53.075221 1745 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:53.075361 kubelet[1745]: E0213 19:40:53.075270 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:53.076350 kubelet[1745]: I0213 19:40:53.076332 1745 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:40:53.077171 kubelet[1745]: I0213 19:40:53.076784 1745 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:40:53.077171 kubelet[1745]: W0213 19:40:53.076920 1745 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:40:53.078073 kubelet[1745]: I0213 19:40:53.077880 1745 server.go:1264] "Started kubelet" Feb 13 19:40:53.078372 kubelet[1745]: I0213 19:40:53.078347 1745 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:40:53.079366 kubelet[1745]: I0213 19:40:53.078911 1745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:40:53.079366 kubelet[1745]: I0213 19:40:53.079201 1745 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:40:53.079477 kubelet[1745]: I0213 19:40:53.079398 1745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:40:53.079506 kubelet[1745]: I0213 19:40:53.079496 1745 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:40:53.081676 kubelet[1745]: E0213 19:40:53.081644 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.081858 kubelet[1745]: I0213 19:40:53.081846 1745 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:40:53.082009 kubelet[1745]: I0213 19:40:53.081996 1745 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:40:53.082982 kubelet[1745]: I0213 19:40:53.082961 1745 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:40:53.084393 kubelet[1745]: I0213 19:40:53.084354 1745 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:40:53.085082 kubelet[1745]: E0213 19:40:53.085051 1745 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.72\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:40:53.085400 
kubelet[1745]: W0213 19:40:53.085371 1745 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:40:53.085400 kubelet[1745]: E0213 19:40:53.085404 1745 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:40:53.085824 kubelet[1745]: E0213 19:40:53.085703 1745 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:40:53.085824 kubelet[1745]: W0213 19:40:53.085715 1745 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:40:53.085824 kubelet[1745]: E0213 19:40:53.085740 1745 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:40:53.085902 kubelet[1745]: W0213 19:40:53.085846 1745 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.72" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:40:53.085902 kubelet[1745]: E0213 19:40:53.085864 1745 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.72" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:40:53.086877 
kubelet[1745]: I0213 19:40:53.086852 1745 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:40:53.087045 kubelet[1745]: I0213 19:40:53.086990 1745 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:40:53.088463 kubelet[1745]: E0213 19:40:53.084845 1745 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.72.1823dbe3ed255eae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.72,UID:10.0.0.72,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.72,},FirstTimestamp:2025-02-13 19:40:53.077851822 +0000 UTC m=+0.653818951,LastTimestamp:2025-02-13 19:40:53.077851822 +0000 UTC m=+0.653818951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.72,}" Feb 13 19:40:53.098273 kubelet[1745]: I0213 19:40:53.098241 1745 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:40:53.098273 kubelet[1745]: I0213 19:40:53.098255 1745 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:40:53.098273 kubelet[1745]: I0213 19:40:53.098273 1745 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:40:53.164508 kubelet[1745]: I0213 19:40:53.164457 1745 policy_none.go:49] "None policy: Start" Feb 13 19:40:53.165150 kubelet[1745]: I0213 19:40:53.165133 1745 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:40:53.165210 kubelet[1745]: I0213 19:40:53.165162 1745 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:40:53.175712 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 19:40:53.183670 kubelet[1745]: I0213 19:40:53.183646 1745 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.72" Feb 13 19:40:53.184874 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:40:53.188352 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:40:53.189241 kubelet[1745]: I0213 19:40:53.189125 1745 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.72" Feb 13 19:40:53.190734 kubelet[1745]: I0213 19:40:53.190614 1745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:40:53.191810 kubelet[1745]: I0213 19:40:53.191793 1745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:40:53.193354 kubelet[1745]: I0213 19:40:53.193203 1745 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:40:53.193437 kubelet[1745]: I0213 19:40:53.193348 1745 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:40:53.193437 kubelet[1745]: E0213 19:40:53.193427 1745 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:40:53.195309 kubelet[1745]: I0213 19:40:53.195173 1745 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:40:53.195439 kubelet[1745]: I0213 19:40:53.195381 1745 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:40:53.195737 kubelet[1745]: I0213 19:40:53.195640 1745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:40:53.197657 kubelet[1745]: E0213 19:40:53.197614 1745 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.72\" not found" Feb 13 19:40:53.201453 
kubelet[1745]: E0213 19:40:53.201427 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.302078 kubelet[1745]: E0213 19:40:53.301947 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.402112 kubelet[1745]: E0213 19:40:53.402055 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.435741 sudo[1612]: pam_unix(sudo:session): session closed for user root Feb 13 19:40:53.437211 sshd[1608]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:53.440725 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:33614.service: Deactivated successfully. Feb 13 19:40:53.442488 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:40:53.443127 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:40:53.444035 systemd-logind[1421]: Removed session 7. 
Feb 13 19:40:53.502337 kubelet[1745]: E0213 19:40:53.502288 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.603245 kubelet[1745]: E0213 19:40:53.603131 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.703957 kubelet[1745]: E0213 19:40:53.703918 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.804560 kubelet[1745]: E0213 19:40:53.804522 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:53.905199 kubelet[1745]: E0213 19:40:53.905121 1745 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.72\" not found" Feb 13 19:40:54.006332 kubelet[1745]: I0213 19:40:54.006264 1745 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:40:54.006774 containerd[1433]: time="2025-02-13T19:40:54.006680358Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:40:54.007074 kubelet[1745]: I0213 19:40:54.006865 1745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:40:54.035970 kubelet[1745]: I0213 19:40:54.035930 1745 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:40:54.036073 kubelet[1745]: W0213 19:40:54.036051 1745 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:40:54.036150 kubelet[1745]: W0213 19:40:54.036082 1745 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:40:54.075883 kubelet[1745]: I0213 19:40:54.075843 1745 apiserver.go:52] "Watching apiserver" Feb 13 19:40:54.076162 kubelet[1745]: E0213 19:40:54.076143 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:54.093674 kubelet[1745]: I0213 19:40:54.093585 1745 topology_manager.go:215] "Topology Admit Handler" podUID="6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc" podNamespace="kube-system" podName="kube-proxy-q8l5l" Feb 13 19:40:54.093674 kubelet[1745]: I0213 19:40:54.093668 1745 topology_manager.go:215] "Topology Admit Handler" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" podNamespace="kube-system" podName="cilium-km8tj" Feb 13 19:40:54.099115 systemd[1]: Created slice kubepods-burstable-pod93ee06e7_102a_46b7_a9ad_200a00887cff.slice - libcontainer container kubepods-burstable-pod93ee06e7_102a_46b7_a9ad_200a00887cff.slice. 
Feb 13 19:40:54.121232 systemd[1]: Created slice kubepods-besteffort-pod6e41b3fc_e38b_4a8e_93dd_0bb1c18025bc.slice - libcontainer container kubepods-besteffort-pod6e41b3fc_e38b_4a8e_93dd_0bb1c18025bc.slice. Feb 13 19:40:54.183401 kubelet[1745]: I0213 19:40:54.183264 1745 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:40:54.190026 kubelet[1745]: I0213 19:40:54.189989 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc-lib-modules\") pod \"kube-proxy-q8l5l\" (UID: \"6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc\") " pod="kube-system/kube-proxy-q8l5l" Feb 13 19:40:54.190113 kubelet[1745]: I0213 19:40:54.190028 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45kd\" (UniqueName: \"kubernetes.io/projected/6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc-kube-api-access-f45kd\") pod \"kube-proxy-q8l5l\" (UID: \"6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc\") " pod="kube-system/kube-proxy-q8l5l" Feb 13 19:40:54.190113 kubelet[1745]: I0213 19:40:54.190051 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-run\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190113 kubelet[1745]: I0213 19:40:54.190068 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cni-path\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190113 kubelet[1745]: I0213 19:40:54.190086 1745 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-etc-cni-netd\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190113 kubelet[1745]: I0213 19:40:54.190101 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8mz\" (UniqueName: \"kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-kube-api-access-2x8mz\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190227 kubelet[1745]: I0213 19:40:54.190129 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc-xtables-lock\") pod \"kube-proxy-q8l5l\" (UID: \"6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc\") " pod="kube-system/kube-proxy-q8l5l" Feb 13 19:40:54.190227 kubelet[1745]: I0213 19:40:54.190145 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-bpf-maps\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190227 kubelet[1745]: I0213 19:40:54.190160 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93ee06e7-102a-46b7-a9ad-200a00887cff-clustermesh-secrets\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190227 kubelet[1745]: I0213 19:40:54.190173 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-net\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190227 kubelet[1745]: I0213 19:40:54.190190 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-config-path\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190347 kubelet[1745]: I0213 19:40:54.190205 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-kernel\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190347 kubelet[1745]: I0213 19:40:54.190222 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc-kube-proxy\") pod \"kube-proxy-q8l5l\" (UID: \"6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc\") " pod="kube-system/kube-proxy-q8l5l" Feb 13 19:40:54.190347 kubelet[1745]: I0213 19:40:54.190236 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-hostproc\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190347 kubelet[1745]: I0213 19:40:54.190250 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-cgroup\") pod \"cilium-km8tj\" (UID: 
\"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190347 kubelet[1745]: I0213 19:40:54.190265 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-xtables-lock\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190347 kubelet[1745]: I0213 19:40:54.190283 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-lib-modules\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.190473 kubelet[1745]: I0213 19:40:54.190299 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-hubble-tls\") pod \"cilium-km8tj\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") " pod="kube-system/cilium-km8tj" Feb 13 19:40:54.419645 kubelet[1745]: E0213 19:40:54.419612 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:54.420271 containerd[1433]: time="2025-02-13T19:40:54.420234875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-km8tj,Uid:93ee06e7-102a-46b7-a9ad-200a00887cff,Namespace:kube-system,Attempt:0,}" Feb 13 19:40:54.434461 kubelet[1745]: E0213 19:40:54.434045 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:54.434655 containerd[1433]: time="2025-02-13T19:40:54.434549892Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-q8l5l,Uid:6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc,Namespace:kube-system,Attempt:0,}" Feb 13 19:40:54.964931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601720648.mount: Deactivated successfully. Feb 13 19:40:54.971062 containerd[1433]: time="2025-02-13T19:40:54.971016091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:54.972249 containerd[1433]: time="2025-02-13T19:40:54.972046156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:54.972458 containerd[1433]: time="2025-02-13T19:40:54.972432284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:40:54.973271 containerd[1433]: time="2025-02-13T19:40:54.973188130Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:54.973966 containerd[1433]: time="2025-02-13T19:40:54.973741214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:40:54.975371 containerd[1433]: time="2025-02-13T19:40:54.975297001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:54.977967 containerd[1433]: time="2025-02-13T19:40:54.977674754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.304488ms" Feb 13 19:40:54.978960 containerd[1433]: time="2025-02-13T19:40:54.978931629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.312102ms" Feb 13 19:40:55.076868 kubelet[1745]: E0213 19:40:55.076822 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:55.093349 containerd[1433]: time="2025-02-13T19:40:55.092799025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:40:55.093349 containerd[1433]: time="2025-02-13T19:40:55.092846210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:40:55.093349 containerd[1433]: time="2025-02-13T19:40:55.092868450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:55.093349 containerd[1433]: time="2025-02-13T19:40:55.092946795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:55.094068 containerd[1433]: time="2025-02-13T19:40:55.093999068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:40:55.094481 containerd[1433]: time="2025-02-13T19:40:55.094048958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:40:55.094481 containerd[1433]: time="2025-02-13T19:40:55.094164074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:55.094481 containerd[1433]: time="2025-02-13T19:40:55.094238585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:55.177567 systemd[1]: Started cri-containerd-3f897944096314e62473b19d51341289cefbc6f02c8a9f97301e87dc636175e4.scope - libcontainer container 3f897944096314e62473b19d51341289cefbc6f02c8a9f97301e87dc636175e4. Feb 13 19:40:55.179082 systemd[1]: Started cri-containerd-ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c.scope - libcontainer container ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c. 
Feb 13 19:40:55.198953 containerd[1433]: time="2025-02-13T19:40:55.198571391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-km8tj,Uid:93ee06e7-102a-46b7-a9ad-200a00887cff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\"" Feb 13 19:40:55.200996 kubelet[1745]: E0213 19:40:55.200973 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:55.202239 containerd[1433]: time="2025-02-13T19:40:55.202205508Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:40:55.204257 containerd[1433]: time="2025-02-13T19:40:55.204224566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q8l5l,Uid:6e41b3fc-e38b-4a8e-93dd-0bb1c18025bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f897944096314e62473b19d51341289cefbc6f02c8a9f97301e87dc636175e4\"" Feb 13 19:40:55.205628 kubelet[1745]: E0213 19:40:55.205552 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:56.077633 kubelet[1745]: E0213 19:40:56.077591 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:57.078539 kubelet[1745]: E0213 19:40:57.078502 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:57.675982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558434061.mount: Deactivated successfully. 
Feb 13 19:40:58.079663 kubelet[1745]: E0213 19:40:58.079609 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:59.080740 kubelet[1745]: E0213 19:40:59.080576 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:40:59.526615 containerd[1433]: time="2025-02-13T19:40:59.526567179Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:59.527555 containerd[1433]: time="2025-02-13T19:40:59.527515442Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:40:59.528459 containerd[1433]: time="2025-02-13T19:40:59.528385451Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:59.529860 containerd[1433]: time="2025-02-13T19:40:59.529721286Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.327475942s" Feb 13 19:40:59.529860 containerd[1433]: time="2025-02-13T19:40:59.529754301Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:40:59.531109 containerd[1433]: time="2025-02-13T19:40:59.531070795Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:40:59.534349 containerd[1433]: time="2025-02-13T19:40:59.532388696Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:40:59.545689 containerd[1433]: time="2025-02-13T19:40:59.545635625Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\"" Feb 13 19:40:59.546234 containerd[1433]: time="2025-02-13T19:40:59.546201904Z" level=info msg="StartContainer for \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\"" Feb 13 19:40:59.571480 systemd[1]: Started cri-containerd-9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e.scope - libcontainer container 9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e. Feb 13 19:40:59.589726 containerd[1433]: time="2025-02-13T19:40:59.589686281Z" level=info msg="StartContainer for \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\" returns successfully" Feb 13 19:40:59.629166 systemd[1]: cri-containerd-9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e.scope: Deactivated successfully. 
Feb 13 19:40:59.742424 containerd[1433]: time="2025-02-13T19:40:59.742366107Z" level=info msg="shim disconnected" id=9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e namespace=k8s.io Feb 13 19:40:59.742424 containerd[1433]: time="2025-02-13T19:40:59.742416816Z" level=warning msg="cleaning up after shim disconnected" id=9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e namespace=k8s.io Feb 13 19:40:59.742424 containerd[1433]: time="2025-02-13T19:40:59.742425220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:00.080945 kubelet[1745]: E0213 19:41:00.080882 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:00.209592 kubelet[1745]: E0213 19:41:00.209561 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:00.211545 containerd[1433]: time="2025-02-13T19:41:00.211504223Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:41:00.221176 containerd[1433]: time="2025-02-13T19:41:00.221095024Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\"" Feb 13 19:41:00.221627 containerd[1433]: time="2025-02-13T19:41:00.221601091Z" level=info msg="StartContainer for \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\"" Feb 13 19:41:00.248468 systemd[1]: Started cri-containerd-349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3.scope - libcontainer container 349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3. 
Feb 13 19:41:00.269805 containerd[1433]: time="2025-02-13T19:41:00.269736139Z" level=info msg="StartContainer for \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\" returns successfully" Feb 13 19:41:00.287089 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:41:00.287569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:41:00.287859 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:41:00.294748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:41:00.296250 systemd[1]: cri-containerd-349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3.scope: Deactivated successfully. Feb 13 19:41:00.305644 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:41:00.333540 containerd[1433]: time="2025-02-13T19:41:00.333308666Z" level=info msg="shim disconnected" id=349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3 namespace=k8s.io Feb 13 19:41:00.333540 containerd[1433]: time="2025-02-13T19:41:00.333372682Z" level=warning msg="cleaning up after shim disconnected" id=349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3 namespace=k8s.io Feb 13 19:41:00.333540 containerd[1433]: time="2025-02-13T19:41:00.333381966Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:00.345913 containerd[1433]: time="2025-02-13T19:41:00.345867955Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:41:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:41:00.543154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e-rootfs.mount: Deactivated successfully. Feb 13 19:41:00.555400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334474539.mount: Deactivated successfully. 
Feb 13 19:41:00.748351 containerd[1433]: time="2025-02-13T19:41:00.748272889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:00.749143 containerd[1433]: time="2025-02-13T19:41:00.749106797Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:41:00.750123 containerd[1433]: time="2025-02-13T19:41:00.749906506Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:00.752301 containerd[1433]: time="2025-02-13T19:41:00.752268581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:00.752851 containerd[1433]: time="2025-02-13T19:41:00.752815799Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.221714126s" Feb 13 19:41:00.752851 containerd[1433]: time="2025-02-13T19:41:00.752849274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:41:00.755117 containerd[1433]: time="2025-02-13T19:41:00.755085646Z" level=info msg="CreateContainer within sandbox \"3f897944096314e62473b19d51341289cefbc6f02c8a9f97301e87dc636175e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:41:00.765488 containerd[1433]: time="2025-02-13T19:41:00.765440913Z" level=info msg="CreateContainer 
within sandbox \"3f897944096314e62473b19d51341289cefbc6f02c8a9f97301e87dc636175e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"620575a9a4f76fc8dc8e47b818040011dec260c036d3f013bf4667101fef5914\"" Feb 13 19:41:00.766033 containerd[1433]: time="2025-02-13T19:41:00.766005813Z" level=info msg="StartContainer for \"620575a9a4f76fc8dc8e47b818040011dec260c036d3f013bf4667101fef5914\"" Feb 13 19:41:00.790483 systemd[1]: Started cri-containerd-620575a9a4f76fc8dc8e47b818040011dec260c036d3f013bf4667101fef5914.scope - libcontainer container 620575a9a4f76fc8dc8e47b818040011dec260c036d3f013bf4667101fef5914. Feb 13 19:41:00.811378 containerd[1433]: time="2025-02-13T19:41:00.811291727Z" level=info msg="StartContainer for \"620575a9a4f76fc8dc8e47b818040011dec260c036d3f013bf4667101fef5914\" returns successfully" Feb 13 19:41:01.081995 kubelet[1745]: E0213 19:41:01.081898 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:01.213040 kubelet[1745]: E0213 19:41:01.212769 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:01.215149 kubelet[1745]: E0213 19:41:01.215124 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:01.216894 containerd[1433]: time="2025-02-13T19:41:01.216854903Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:41:01.232463 containerd[1433]: time="2025-02-13T19:41:01.232365163Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} 
returns container id \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\"" Feb 13 19:41:01.232899 containerd[1433]: time="2025-02-13T19:41:01.232865033Z" level=info msg="StartContainer for \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\"" Feb 13 19:41:01.240992 kubelet[1745]: I0213 19:41:01.240932 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q8l5l" podStartSLOduration=2.6929751189999998 podStartE2EDuration="8.240915803s" podCreationTimestamp="2025-02-13 19:40:53 +0000 UTC" firstStartedPulling="2025-02-13 19:40:55.205950019 +0000 UTC m=+2.781917148" lastFinishedPulling="2025-02-13 19:41:00.753890704 +0000 UTC m=+8.329857832" observedRunningTime="2025-02-13 19:41:01.224800245 +0000 UTC m=+8.800767373" watchObservedRunningTime="2025-02-13 19:41:01.240915803 +0000 UTC m=+8.816883012" Feb 13 19:41:01.264531 systemd[1]: Started cri-containerd-6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be.scope - libcontainer container 6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be. Feb 13 19:41:01.286411 containerd[1433]: time="2025-02-13T19:41:01.285636715Z" level=info msg="StartContainer for \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\" returns successfully" Feb 13 19:41:01.300767 systemd[1]: cri-containerd-6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be.scope: Deactivated successfully. 
Feb 13 19:41:01.420544 containerd[1433]: time="2025-02-13T19:41:01.420347476Z" level=info msg="shim disconnected" id=6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be namespace=k8s.io Feb 13 19:41:01.420544 containerd[1433]: time="2025-02-13T19:41:01.420402379Z" level=warning msg="cleaning up after shim disconnected" id=6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be namespace=k8s.io Feb 13 19:41:01.420544 containerd[1433]: time="2025-02-13T19:41:01.420413746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:02.083023 kubelet[1745]: E0213 19:41:02.082964 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:02.218573 kubelet[1745]: E0213 19:41:02.218406 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.218573 kubelet[1745]: E0213 19:41:02.218431 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.220457 containerd[1433]: time="2025-02-13T19:41:02.220423158Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:41:02.233920 containerd[1433]: time="2025-02-13T19:41:02.233854704Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\"" Feb 13 19:41:02.234462 containerd[1433]: time="2025-02-13T19:41:02.234439262Z" level=info msg="StartContainer for 
\"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\"" Feb 13 19:41:02.261545 systemd[1]: Started cri-containerd-277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5.scope - libcontainer container 277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5. Feb 13 19:41:02.279896 systemd[1]: cri-containerd-277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5.scope: Deactivated successfully. Feb 13 19:41:02.281537 containerd[1433]: time="2025-02-13T19:41:02.281502643Z" level=info msg="StartContainer for \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\" returns successfully" Feb 13 19:41:02.298957 containerd[1433]: time="2025-02-13T19:41:02.298890252Z" level=info msg="shim disconnected" id=277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5 namespace=k8s.io Feb 13 19:41:02.298957 containerd[1433]: time="2025-02-13T19:41:02.298954802Z" level=warning msg="cleaning up after shim disconnected" id=277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5 namespace=k8s.io Feb 13 19:41:02.298957 containerd[1433]: time="2025-02-13T19:41:02.298964958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:02.542133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5-rootfs.mount: Deactivated successfully. 
Feb 13 19:41:03.083135 kubelet[1745]: E0213 19:41:03.083092 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:03.221523 kubelet[1745]: E0213 19:41:03.221462 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:03.223460 containerd[1433]: time="2025-02-13T19:41:03.223336740Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:41:03.239002 containerd[1433]: time="2025-02-13T19:41:03.238953777Z" level=info msg="CreateContainer within sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\"" Feb 13 19:41:03.239728 containerd[1433]: time="2025-02-13T19:41:03.239455419Z" level=info msg="StartContainer for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\"" Feb 13 19:41:03.269573 systemd[1]: Started cri-containerd-eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811.scope - libcontainer container eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811. 
Feb 13 19:41:03.289704 containerd[1433]: time="2025-02-13T19:41:03.289663538Z" level=info msg="StartContainer for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" returns successfully" Feb 13 19:41:03.387312 kubelet[1745]: I0213 19:41:03.387219 1745 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:41:03.780353 kernel: Initializing XFRM netlink socket Feb 13 19:41:04.084301 kubelet[1745]: E0213 19:41:04.084166 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:04.225979 kubelet[1745]: E0213 19:41:04.225933 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:04.244682 kubelet[1745]: I0213 19:41:04.244606 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-km8tj" podStartSLOduration=6.915777283 podStartE2EDuration="11.244588524s" podCreationTimestamp="2025-02-13 19:40:53 +0000 UTC" firstStartedPulling="2025-02-13 19:40:55.201741249 +0000 UTC m=+2.777708377" lastFinishedPulling="2025-02-13 19:40:59.530552489 +0000 UTC m=+7.106519618" observedRunningTime="2025-02-13 19:41:04.243802663 +0000 UTC m=+11.819769791" watchObservedRunningTime="2025-02-13 19:41:04.244588524 +0000 UTC m=+11.820555652" Feb 13 19:41:04.322211 kubelet[1745]: I0213 19:41:04.322171 1745 topology_manager.go:215] "Topology Admit Handler" podUID="b73c3523-e6c7-4027-9e75-c324ce3c57d7" podNamespace="default" podName="nginx-deployment-85f456d6dd-m7hcr" Feb 13 19:41:04.328873 systemd[1]: Created slice kubepods-besteffort-podb73c3523_e6c7_4027_9e75_c324ce3c57d7.slice - libcontainer container kubepods-besteffort-podb73c3523_e6c7_4027_9e75_c324ce3c57d7.slice. 
Feb 13 19:41:04.360579 kubelet[1745]: I0213 19:41:04.360464 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4ng\" (UniqueName: \"kubernetes.io/projected/b73c3523-e6c7-4027-9e75-c324ce3c57d7-kube-api-access-wz4ng\") pod \"nginx-deployment-85f456d6dd-m7hcr\" (UID: \"b73c3523-e6c7-4027-9e75-c324ce3c57d7\") " pod="default/nginx-deployment-85f456d6dd-m7hcr" Feb 13 19:41:04.633060 containerd[1433]: time="2025-02-13T19:41:04.632908231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-m7hcr,Uid:b73c3523-e6c7-4027-9e75-c324ce3c57d7,Namespace:default,Attempt:0,}" Feb 13 19:41:05.084797 kubelet[1745]: E0213 19:41:05.084747 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:05.227870 kubelet[1745]: E0213 19:41:05.227770 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:05.403555 systemd-networkd[1380]: cilium_host: Link UP Feb 13 19:41:05.403680 systemd-networkd[1380]: cilium_net: Link UP Feb 13 19:41:05.403801 systemd-networkd[1380]: cilium_net: Gained carrier Feb 13 19:41:05.403922 systemd-networkd[1380]: cilium_host: Gained carrier Feb 13 19:41:05.479136 systemd-networkd[1380]: cilium_vxlan: Link UP Feb 13 19:41:05.479229 systemd-networkd[1380]: cilium_vxlan: Gained carrier Feb 13 19:41:05.768353 kernel: NET: Registered PF_ALG protocol family Feb 13 19:41:06.079482 systemd-networkd[1380]: cilium_net: Gained IPv6LL Feb 13 19:41:06.085415 kubelet[1745]: E0213 19:41:06.085378 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:06.229208 kubelet[1745]: E0213 19:41:06.229180 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:06.272476 systemd-networkd[1380]: cilium_host: Gained IPv6LL Feb 13 19:41:06.308755 systemd-networkd[1380]: lxc_health: Link UP Feb 13 19:41:06.320298 systemd-networkd[1380]: lxc_health: Gained carrier Feb 13 19:41:06.691226 systemd-networkd[1380]: lxc2e871582d9d4: Link UP Feb 13 19:41:06.699349 kernel: eth0: renamed from tmp21181 Feb 13 19:41:06.706346 systemd-networkd[1380]: lxc2e871582d9d4: Gained carrier Feb 13 19:41:06.720510 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL Feb 13 19:41:07.085566 kubelet[1745]: E0213 19:41:07.085522 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:07.264894 kubelet[1745]: E0213 19:41:07.264713 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:07.935477 systemd-networkd[1380]: lxc_health: Gained IPv6LL Feb 13 19:41:08.086682 kubelet[1745]: E0213 19:41:08.086631 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:08.383590 systemd-networkd[1380]: lxc2e871582d9d4: Gained IPv6LL Feb 13 19:41:09.087768 kubelet[1745]: E0213 19:41:09.087720 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:10.088397 kubelet[1745]: E0213 19:41:10.088343 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:10.142975 containerd[1433]: time="2025-02-13T19:41:10.142889561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:10.142975 containerd[1433]: time="2025-02-13T19:41:10.142945630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:10.142975 containerd[1433]: time="2025-02-13T19:41:10.142968618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:10.143391 containerd[1433]: time="2025-02-13T19:41:10.143157049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:10.169496 systemd[1]: Started cri-containerd-21181cd47b20ed65f6883d4b7b1faa24520c81e5d3d5b8d6a900ca7d00557b08.scope - libcontainer container 21181cd47b20ed65f6883d4b7b1faa24520c81e5d3d5b8d6a900ca7d00557b08. Feb 13 19:41:10.179921 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:41:10.194566 containerd[1433]: time="2025-02-13T19:41:10.194530546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-m7hcr,Uid:b73c3523-e6c7-4027-9e75-c324ce3c57d7,Namespace:default,Attempt:0,} returns sandbox id \"21181cd47b20ed65f6883d4b7b1faa24520c81e5d3d5b8d6a900ca7d00557b08\"" Feb 13 19:41:10.196096 containerd[1433]: time="2025-02-13T19:41:10.195978319Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:41:11.089299 kubelet[1745]: E0213 19:41:11.089267 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:12.090081 kubelet[1745]: E0213 19:41:12.090044 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:12.215518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742440250.mount: Deactivated 
successfully. Feb 13 19:41:12.957555 containerd[1433]: time="2025-02-13T19:41:12.957506626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:12.958708 containerd[1433]: time="2025-02-13T19:41:12.957965456Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:41:12.959048 containerd[1433]: time="2025-02-13T19:41:12.959017523Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:12.961964 containerd[1433]: time="2025-02-13T19:41:12.961911596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:12.963412 containerd[1433]: time="2025-02-13T19:41:12.963370764Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.767346393s" Feb 13 19:41:12.963455 containerd[1433]: time="2025-02-13T19:41:12.963410682Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:41:12.965544 containerd[1433]: time="2025-02-13T19:41:12.965518578Z" level=info msg="CreateContainer within sandbox \"21181cd47b20ed65f6883d4b7b1faa24520c81e5d3d5b8d6a900ca7d00557b08\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:41:12.974274 containerd[1433]: time="2025-02-13T19:41:12.974135498Z" level=info msg="CreateContainer within sandbox 
\"21181cd47b20ed65f6883d4b7b1faa24520c81e5d3d5b8d6a900ca7d00557b08\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c17cec621a6050dd2837e8d719b36632aa7bb6a483e4597c0975d2842db73c1b\"" Feb 13 19:41:12.974876 containerd[1433]: time="2025-02-13T19:41:12.974840159Z" level=info msg="StartContainer for \"c17cec621a6050dd2837e8d719b36632aa7bb6a483e4597c0975d2842db73c1b\"" Feb 13 19:41:13.003507 systemd[1]: Started cri-containerd-c17cec621a6050dd2837e8d719b36632aa7bb6a483e4597c0975d2842db73c1b.scope - libcontainer container c17cec621a6050dd2837e8d719b36632aa7bb6a483e4597c0975d2842db73c1b. Feb 13 19:41:13.023478 containerd[1433]: time="2025-02-13T19:41:13.023429769Z" level=info msg="StartContainer for \"c17cec621a6050dd2837e8d719b36632aa7bb6a483e4597c0975d2842db73c1b\" returns successfully" Feb 13 19:41:13.076143 kubelet[1745]: E0213 19:41:13.076098 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:13.090502 kubelet[1745]: E0213 19:41:13.090463 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:13.248582 kubelet[1745]: I0213 19:41:13.248440 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-m7hcr" podStartSLOduration=6.479868499 podStartE2EDuration="9.248424383s" podCreationTimestamp="2025-02-13 19:41:04 +0000 UTC" firstStartedPulling="2025-02-13 19:41:10.195744432 +0000 UTC m=+17.771711560" lastFinishedPulling="2025-02-13 19:41:12.964300316 +0000 UTC m=+20.540267444" observedRunningTime="2025-02-13 19:41:13.248071414 +0000 UTC m=+20.824038542" watchObservedRunningTime="2025-02-13 19:41:13.248424383 +0000 UTC m=+20.824391512" Feb 13 19:41:14.091294 kubelet[1745]: E0213 19:41:14.091251 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:14.489241 
kubelet[1745]: I0213 19:41:14.489194 1745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:41:14.490031 kubelet[1745]: E0213 19:41:14.489999 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:15.091624 kubelet[1745]: E0213 19:41:15.091569 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:15.243859 kubelet[1745]: E0213 19:41:15.243823 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:16.092743 kubelet[1745]: E0213 19:41:16.092694 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:16.529217 kubelet[1745]: I0213 19:41:16.529039 1745 topology_manager.go:215] "Topology Admit Handler" podUID="73336087-2a83-4705-9971-ec06267925fa" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:41:16.534875 systemd[1]: Created slice kubepods-besteffort-pod73336087_2a83_4705_9971_ec06267925fa.slice - libcontainer container kubepods-besteffort-pod73336087_2a83_4705_9971_ec06267925fa.slice. 
Feb 13 19:41:16.640962 kubelet[1745]: I0213 19:41:16.640889 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/73336087-2a83-4705-9971-ec06267925fa-data\") pod \"nfs-server-provisioner-0\" (UID: \"73336087-2a83-4705-9971-ec06267925fa\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:41:16.640962 kubelet[1745]: I0213 19:41:16.640937 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnwp\" (UniqueName: \"kubernetes.io/projected/73336087-2a83-4705-9971-ec06267925fa-kube-api-access-nfnwp\") pod \"nfs-server-provisioner-0\" (UID: \"73336087-2a83-4705-9971-ec06267925fa\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:41:16.838896 containerd[1433]: time="2025-02-13T19:41:16.838759820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:73336087-2a83-4705-9971-ec06267925fa,Namespace:default,Attempt:0,}"
Feb 13 19:41:16.864982 systemd-networkd[1380]: lxc97fcce05d777: Link UP
Feb 13 19:41:16.880362 kernel: eth0: renamed from tmpa62e9
Feb 13 19:41:16.892026 systemd-networkd[1380]: lxc97fcce05d777: Gained carrier
Feb 13 19:41:17.070304 containerd[1433]: time="2025-02-13T19:41:17.070202338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:41:17.070304 containerd[1433]: time="2025-02-13T19:41:17.070270947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:41:17.070590 containerd[1433]: time="2025-02-13T19:41:17.070300408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:41:17.070590 containerd[1433]: time="2025-02-13T19:41:17.070403681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:41:17.091563 systemd[1]: Started cri-containerd-a62e9f307af04aa7c52f1a587259b31356be3f5f34a23e15d103098868c9d059.scope - libcontainer container a62e9f307af04aa7c52f1a587259b31356be3f5f34a23e15d103098868c9d059.
Feb 13 19:41:17.093103 kubelet[1745]: E0213 19:41:17.093050 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:17.100825 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:41:17.116413 containerd[1433]: time="2025-02-13T19:41:17.116330245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:73336087-2a83-4705-9971-ec06267925fa,Namespace:default,Attempt:0,} returns sandbox id \"a62e9f307af04aa7c52f1a587259b31356be3f5f34a23e15d103098868c9d059\""
Feb 13 19:41:17.121599 containerd[1433]: time="2025-02-13T19:41:17.121489645Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 19:41:18.047620 systemd-networkd[1380]: lxc97fcce05d777: Gained IPv6LL
Feb 13 19:41:18.093280 kubelet[1745]: E0213 19:41:18.093207 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:19.038417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901334590.mount: Deactivated successfully.
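The file_linux.go:61 error that recurs once per second throughout this log means the kubelet's static-pod directory (its staticPodPath, /etc/kubernetes/manifests on this node) does not exist; the kubelet treats that as non-fatal and just keeps logging. A hedged sketch of the usual remedy, using a scratch root in place of the real / so it is side-effect free:

```python
# Sketch: while the staticPodPath is missing, the kubelet logs "Unable to
# read config path" on every sync loop. Creating the directory, even empty,
# stops the message. `root` is a stand-in for / in this sketch.
import os
import tempfile

root = tempfile.mkdtemp()
static_pod_path = os.path.join(root, "etc/kubernetes/manifests")
os.makedirs(static_pod_path, exist_ok=True)  # what a fix would do on the node
print(os.path.isdir(static_pod_path))
```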
Feb 13 19:41:19.093723 kubelet[1745]: E0213 19:41:19.093690 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:20.094498 kubelet[1745]: E0213 19:41:20.094444 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:20.424442 containerd[1433]: time="2025-02-13T19:41:20.424311983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:41:20.425293 containerd[1433]: time="2025-02-13T19:41:20.425246660Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Feb 13 19:41:20.434567 containerd[1433]: time="2025-02-13T19:41:20.434335117Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:41:20.437267 containerd[1433]: time="2025-02-13T19:41:20.437235086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:41:20.438312 containerd[1433]: time="2025-02-13T19:41:20.438269822Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.31673891s"
Feb 13 19:41:20.438394 containerd[1433]: time="2025-02-13T19:41:20.438312448Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 13 19:41:20.440929 containerd[1433]: time="2025-02-13T19:41:20.440822384Z" level=info msg="CreateContainer within sandbox \"a62e9f307af04aa7c52f1a587259b31356be3f5f34a23e15d103098868c9d059\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 19:41:20.452414 containerd[1433]: time="2025-02-13T19:41:20.452375670Z" level=info msg="CreateContainer within sandbox \"a62e9f307af04aa7c52f1a587259b31356be3f5f34a23e15d103098868c9d059\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e25991629db79457674406482db9bd5a3382c81c7c1e2a2cc2ae4cfde88515ba\""
Feb 13 19:41:20.454253 containerd[1433]: time="2025-02-13T19:41:20.453636862Z" level=info msg="StartContainer for \"e25991629db79457674406482db9bd5a3382c81c7c1e2a2cc2ae4cfde88515ba\""
Feb 13 19:41:20.525565 systemd[1]: Started cri-containerd-e25991629db79457674406482db9bd5a3382c81c7c1e2a2cc2ae4cfde88515ba.scope - libcontainer container e25991629db79457674406482db9bd5a3382c81c7c1e2a2cc2ae4cfde88515ba.
Feb 13 19:41:20.546014 containerd[1433]: time="2025-02-13T19:41:20.545943441Z" level=info msg="StartContainer for \"e25991629db79457674406482db9bd5a3382c81c7c1e2a2cc2ae4cfde88515ba\" returns successfully"
Feb 13 19:41:21.095060 kubelet[1745]: E0213 19:41:21.095000 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:21.265026 kubelet[1745]: I0213 19:41:21.264963 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.946717357 podStartE2EDuration="5.264949358s" podCreationTimestamp="2025-02-13 19:41:16 +0000 UTC" firstStartedPulling="2025-02-13 19:41:17.121145402 +0000 UTC m=+24.697112490" lastFinishedPulling="2025-02-13 19:41:20.439377403 +0000 UTC m=+28.015344491" observedRunningTime="2025-02-13 19:41:21.264030079 +0000 UTC m=+28.839997207" watchObservedRunningTime="2025-02-13 19:41:21.264949358 +0000 UTC m=+28.840916486"
Feb 13 19:41:22.095516 kubelet[1745]: E0213 19:41:22.095472 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:23.096114 kubelet[1745]: E0213 19:41:23.096072 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:24.096521 kubelet[1745]: E0213 19:41:24.096472 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:25.097539 kubelet[1745]: E0213 19:41:25.097491 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:26.098063 kubelet[1745]: E0213 19:41:26.098007 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:27.098991 kubelet[1745]: E0213 19:41:27.098936 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:28.099889 kubelet[1745]: E0213 19:41:28.099847 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:29.100373 kubelet[1745]: E0213 19:41:29.100329 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:29.129422 update_engine[1425]: I20250213 19:41:29.129308 1425 update_attempter.cc:509] Updating boot flags...
Feb 13 19:41:29.160365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3129)
Feb 13 19:41:29.183384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3129)
Feb 13 19:41:30.101158 kubelet[1745]: E0213 19:41:30.101116 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:30.358484 kubelet[1745]: I0213 19:41:30.358358 1745 topology_manager.go:215] "Topology Admit Handler" podUID="31959cb7-9e76-448b-bd2e-7ceba2344443" podNamespace="default" podName="test-pod-1"
Feb 13 19:41:30.364175 systemd[1]: Created slice kubepods-besteffort-pod31959cb7_9e76_448b_bd2e_7ceba2344443.slice - libcontainer container kubepods-besteffort-pod31959cb7_9e76_448b_bd2e_7ceba2344443.slice.
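The pod_startup_latency_tracker entry for nfs-server-provisioner-0 encodes a simple relation: podStartSLOduration is the end-to-end startup duration minus the image-pull window, which the tracker excludes from the SLO. Reproducing the arithmetic from the logged values (a sketch; the m=+ offsets are seconds since kubelet start, so their differences are exact durations):

```python
# Values taken from the "Observed pod startup duration" log entry above.
first_started_pulling = 24.697112490   # m=+24.697112490
last_finished_pulling = 28.015344491   # m=+28.015344491
pod_start_e2e = 5.264949358            # podStartE2EDuration="5.264949358s"

# SLO duration = end-to-end duration minus time spent pulling the image.
pull_window = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - pull_window
print(round(slo_duration, 9))  # 1.946717357, matching podStartSLOduration
```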
Feb 13 19:41:30.420833 kubelet[1745]: I0213 19:41:30.420766 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-18dfa0ef-1d6b-4495-a87d-f123b05cc5a2\" (UniqueName: \"kubernetes.io/nfs/31959cb7-9e76-448b-bd2e-7ceba2344443-pvc-18dfa0ef-1d6b-4495-a87d-f123b05cc5a2\") pod \"test-pod-1\" (UID: \"31959cb7-9e76-448b-bd2e-7ceba2344443\") " pod="default/test-pod-1"
Feb 13 19:41:30.420833 kubelet[1745]: I0213 19:41:30.420820 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7q9\" (UniqueName: \"kubernetes.io/projected/31959cb7-9e76-448b-bd2e-7ceba2344443-kube-api-access-nb7q9\") pod \"test-pod-1\" (UID: \"31959cb7-9e76-448b-bd2e-7ceba2344443\") " pod="default/test-pod-1"
Feb 13 19:41:30.539382 kernel: FS-Cache: Loaded
Feb 13 19:41:30.562516 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:41:30.562593 kernel: RPC: Registered udp transport module.
Feb 13 19:41:30.562615 kernel: RPC: Registered tcp transport module.
Feb 13 19:41:30.563674 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:41:30.563744 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:41:30.739376 kernel: NFS: Registering the id_resolver key type
Feb 13 19:41:30.739570 kernel: Key type id_resolver registered
Feb 13 19:41:30.739606 kernel: Key type id_legacy registered
Feb 13 19:41:30.760795 nfsidmap[3150]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:41:30.764144 nfsidmap[3153]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:41:30.967187 containerd[1433]: time="2025-02-13T19:41:30.967145692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:31959cb7-9e76-448b-bd2e-7ceba2344443,Namespace:default,Attempt:0,}"
Feb 13 19:41:31.005169 systemd-networkd[1380]: lxc1229be64a778: Link UP
Feb 13 19:41:31.007420 kernel: eth0: renamed from tmpeba05
Feb 13 19:41:31.014998 systemd-networkd[1380]: lxc1229be64a778: Gained carrier
Feb 13 19:41:31.102183 kubelet[1745]: E0213 19:41:31.102134 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:31.201684 containerd[1433]: time="2025-02-13T19:41:31.201592888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:41:31.201684 containerd[1433]: time="2025-02-13T19:41:31.201641145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:41:31.201684 containerd[1433]: time="2025-02-13T19:41:31.201655229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:41:31.202406 containerd[1433]: time="2025-02-13T19:41:31.201727253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:41:31.218491 systemd[1]: Started cri-containerd-eba057e39f6c4f213215063db735aa5abb7d2d91ddce4d0a1c7c6670124568e6.scope - libcontainer container eba057e39f6c4f213215063db735aa5abb7d2d91ddce4d0a1c7c6670124568e6.
Feb 13 19:41:31.228918 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:41:31.244204 containerd[1433]: time="2025-02-13T19:41:31.244170127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:31959cb7-9e76-448b-bd2e-7ceba2344443,Namespace:default,Attempt:0,} returns sandbox id \"eba057e39f6c4f213215063db735aa5abb7d2d91ddce4d0a1c7c6670124568e6\""
Feb 13 19:41:31.246174 containerd[1433]: time="2025-02-13T19:41:31.246146990Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:41:31.556591 containerd[1433]: time="2025-02-13T19:41:31.556539480Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:41:31.557362 containerd[1433]: time="2025-02-13T19:41:31.557303336Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:41:31.560274 containerd[1433]: time="2025-02-13T19:41:31.560241482Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 314.05832ms"
Feb 13 19:41:31.560274 containerd[1433]: time="2025-02-13T19:41:31.560275413Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 19:41:31.567551 containerd[1433]: time="2025-02-13T19:41:31.567502277Z" level=info msg="CreateContainer within sandbox \"eba057e39f6c4f213215063db735aa5abb7d2d91ddce4d0a1c7c6670124568e6\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:41:31.582572 containerd[1433]: time="2025-02-13T19:41:31.582529476Z" level=info msg="CreateContainer within sandbox \"eba057e39f6c4f213215063db735aa5abb7d2d91ddce4d0a1c7c6670124568e6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a5d3780b81164be8278679132e35b3299c276fe9d1c4eb5977cad24b59bb6e98\""
Feb 13 19:41:31.582953 containerd[1433]: time="2025-02-13T19:41:31.582931331Z" level=info msg="StartContainer for \"a5d3780b81164be8278679132e35b3299c276fe9d1c4eb5977cad24b59bb6e98\""
Feb 13 19:41:31.613475 systemd[1]: Started cri-containerd-a5d3780b81164be8278679132e35b3299c276fe9d1c4eb5977cad24b59bb6e98.scope - libcontainer container a5d3780b81164be8278679132e35b3299c276fe9d1c4eb5977cad24b59bb6e98.
Feb 13 19:41:31.632572 containerd[1433]: time="2025-02-13T19:41:31.632525602Z" level=info msg="StartContainer for \"a5d3780b81164be8278679132e35b3299c276fe9d1c4eb5977cad24b59bb6e98\" returns successfully"
Feb 13 19:41:32.102720 kubelet[1745]: E0213 19:41:32.102660 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:32.191642 systemd-networkd[1380]: lxc1229be64a778: Gained IPv6LL
Feb 13 19:41:32.281416 kubelet[1745]: I0213 19:41:32.281343 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.965938007 podStartE2EDuration="16.281307486s" podCreationTimestamp="2025-02-13 19:41:16 +0000 UTC" firstStartedPulling="2025-02-13 19:41:31.245509096 +0000 UTC m=+38.821476224" lastFinishedPulling="2025-02-13 19:41:31.560878575 +0000 UTC m=+39.136845703" observedRunningTime="2025-02-13 19:41:32.281126868 +0000 UTC m=+39.857093996" watchObservedRunningTime="2025-02-13 19:41:32.281307486 +0000 UTC m=+39.857274574"
Feb 13 19:41:33.076127 kubelet[1745]: E0213 19:41:33.076067 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:33.103343 kubelet[1745]: E0213 19:41:33.103297 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:34.103569 kubelet[1745]: E0213 19:41:34.103530 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:34.835335 containerd[1433]: time="2025-02-13T19:41:34.835279247Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:41:34.840742 containerd[1433]: time="2025-02-13T19:41:34.840691982Z" level=info msg="StopContainer for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" with timeout 2 (s)"
Feb 13 19:41:34.840960 containerd[1433]: time="2025-02-13T19:41:34.840927970Z" level=info msg="Stop container \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" with signal terminated"
Feb 13 19:41:34.846007 systemd-networkd[1380]: lxc_health: Link DOWN
Feb 13 19:41:34.846015 systemd-networkd[1380]: lxc_health: Lost carrier
Feb 13 19:41:34.873761 systemd[1]: cri-containerd-eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811.scope: Deactivated successfully.
Feb 13 19:41:34.874186 systemd[1]: cri-containerd-eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811.scope: Consumed 6.367s CPU time.
Feb 13 19:41:34.892736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811-rootfs.mount: Deactivated successfully.
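The earlier nfsidmap warnings ("does not map into domain 'localdomain'") mean NFSv4 ID mapping could not reconcile the server principal's domain with the client's configured idmap domain, so such names typically fall back to nobody/nogroup. A hedged sketch of the relevant knob in idmapd.conf; the Domain value below is an assumption inferred from the principal in the log, not something the log confirms:

```ini
; /etc/idmapd.conf (sketch): align Domain with the suffix after '@' in the
; server principal, e.g. root@nfs-server-provisioner.default.svc.cluster.local,
; instead of the default 'localdomain' that failed to match above.
[General]
Domain = nfs-server-provisioner.default.svc.cluster.local
```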
Feb 13 19:41:34.951624 containerd[1433]: time="2025-02-13T19:41:34.951564560Z" level=info msg="shim disconnected" id=eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811 namespace=k8s.io
Feb 13 19:41:34.951624 containerd[1433]: time="2025-02-13T19:41:34.951619216Z" level=warning msg="cleaning up after shim disconnected" id=eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811 namespace=k8s.io
Feb 13 19:41:34.951624 containerd[1433]: time="2025-02-13T19:41:34.951627419Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:41:34.966343 containerd[1433]: time="2025-02-13T19:41:34.966283443Z" level=info msg="StopContainer for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" returns successfully"
Feb 13 19:41:34.967054 containerd[1433]: time="2025-02-13T19:41:34.967027899Z" level=info msg="StopPodSandbox for \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\""
Feb 13 19:41:34.967105 containerd[1433]: time="2025-02-13T19:41:34.967071552Z" level=info msg="Container to stop \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:41:34.967105 containerd[1433]: time="2025-02-13T19:41:34.967086156Z" level=info msg="Container to stop \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:41:34.967105 containerd[1433]: time="2025-02-13T19:41:34.967095959Z" level=info msg="Container to stop \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:41:34.967214 containerd[1433]: time="2025-02-13T19:41:34.967104722Z" level=info msg="Container to stop \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:41:34.967214 containerd[1433]: time="2025-02-13T19:41:34.967114124Z" level=info msg="Container to stop \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:41:34.968725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c-shm.mount: Deactivated successfully.
Feb 13 19:41:34.974002 systemd[1]: cri-containerd-ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c.scope: Deactivated successfully.
Feb 13 19:41:34.996975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c-rootfs.mount: Deactivated successfully.
Feb 13 19:41:35.000199 containerd[1433]: time="2025-02-13T19:41:35.000084517Z" level=info msg="shim disconnected" id=ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c namespace=k8s.io
Feb 13 19:41:35.000199 containerd[1433]: time="2025-02-13T19:41:35.000180785Z" level=warning msg="cleaning up after shim disconnected" id=ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c namespace=k8s.io
Feb 13 19:41:35.000199 containerd[1433]: time="2025-02-13T19:41:35.000191588Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:41:35.010262 containerd[1433]: time="2025-02-13T19:41:35.010184221Z" level=info msg="TearDown network for sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" successfully"
Feb 13 19:41:35.010262 containerd[1433]: time="2025-02-13T19:41:35.010221431Z" level=info msg="StopPodSandbox for \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" returns successfully"
Feb 13 19:41:35.048165 kubelet[1745]: I0213 19:41:35.045811 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-hubble-tls\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048165 kubelet[1745]: I0213 19:41:35.045851 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-run\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048165 kubelet[1745]: I0213 19:41:35.045872 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-lib-modules\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048165 kubelet[1745]: I0213 19:41:35.045888 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-bpf-maps\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048165 kubelet[1745]: I0213 19:41:35.045904 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-net\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048165 kubelet[1745]: I0213 19:41:35.045922 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-cgroup\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048452 kubelet[1745]: I0213 19:41:35.045939 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-kernel\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048452 kubelet[1745]: I0213 19:41:35.045955 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cni-path\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048452 kubelet[1745]: I0213 19:41:35.045976 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93ee06e7-102a-46b7-a9ad-200a00887cff-clustermesh-secrets\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048452 kubelet[1745]: I0213 19:41:35.045991 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-etc-cni-netd\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048452 kubelet[1745]: I0213 19:41:35.045948 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048452 kubelet[1745]: I0213 19:41:35.046007 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x8mz\" (UniqueName: \"kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-kube-api-access-2x8mz\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048580 kubelet[1745]: I0213 19:41:35.046026 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-config-path\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048580 kubelet[1745]: I0213 19:41:35.046042 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-hostproc\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048580 kubelet[1745]: I0213 19:41:35.046057 1745 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-xtables-lock\") pod \"93ee06e7-102a-46b7-a9ad-200a00887cff\" (UID: \"93ee06e7-102a-46b7-a9ad-200a00887cff\") "
Feb 13 19:41:35.048580 kubelet[1745]: I0213 19:41:35.046084 1745 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-lib-modules\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.048580 kubelet[1745]: I0213 19:41:35.046003 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048580 kubelet[1745]: I0213 19:41:35.045970 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048759 kubelet[1745]: I0213 19:41:35.045975 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048759 kubelet[1745]: I0213 19:41:35.045984 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048759 kubelet[1745]: I0213 19:41:35.045995 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048759 kubelet[1745]: I0213 19:41:35.046022 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cni-path" (OuterVolumeSpecName: "cni-path") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048759 kubelet[1745]: I0213 19:41:35.046041 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048864 kubelet[1745]: I0213 19:41:35.046115 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048864 kubelet[1745]: I0213 19:41:35.047381 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-hostproc" (OuterVolumeSpecName: "hostproc") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:41:35.048864 kubelet[1745]: I0213 19:41:35.048111 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:41:35.048864 kubelet[1745]: I0213 19:41:35.048787 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee06e7-102a-46b7-a9ad-200a00887cff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:41:35.048864 kubelet[1745]: I0213 19:41:35.048795 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-kube-api-access-2x8mz" (OuterVolumeSpecName: "kube-api-access-2x8mz") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "kube-api-access-2x8mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:41:35.048969 kubelet[1745]: I0213 19:41:35.048867 1745 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "93ee06e7-102a-46b7-a9ad-200a00887cff" (UID: "93ee06e7-102a-46b7-a9ad-200a00887cff"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:41:35.104064 kubelet[1745]: E0213 19:41:35.103930 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:41:35.146414 kubelet[1745]: I0213 19:41:35.146358 1745 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2x8mz\" (UniqueName: \"kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-kube-api-access-2x8mz\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146414 kubelet[1745]: I0213 19:41:35.146402 1745 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-config-path\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146414 kubelet[1745]: I0213 19:41:35.146411 1745 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-hostproc\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146414 kubelet[1745]: I0213 19:41:35.146420 1745 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-xtables-lock\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146414 kubelet[1745]: I0213 19:41:35.146429 1745 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-etc-cni-netd\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146438 1745 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-run\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146446 1745 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93ee06e7-102a-46b7-a9ad-200a00887cff-hubble-tls\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146454 1745 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-net\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146463 1745 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cilium-cgroup\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146470 1745 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-bpf-maps\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146478 1745 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-cni-path\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146486 1745 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93ee06e7-102a-46b7-a9ad-200a00887cff-clustermesh-secrets\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.146633 kubelet[1745]: I0213 19:41:35.146494 1745 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93ee06e7-102a-46b7-a9ad-200a00887cff-host-proc-sys-kernel\") on node \"10.0.0.72\" DevicePath \"\""
Feb 13 19:41:35.201420 systemd[1]: Removed slice kubepods-burstable-pod93ee06e7_102a_46b7_a9ad_200a00887cff.slice - libcontainer container kubepods-burstable-pod93ee06e7_102a_46b7_a9ad_200a00887cff.slice.
Feb 13 19:41:35.201541 systemd[1]: kubepods-burstable-pod93ee06e7_102a_46b7_a9ad_200a00887cff.slice: Consumed 6.485s CPU time. Feb 13 19:41:35.281328 kubelet[1745]: I0213 19:41:35.281260 1745 scope.go:117] "RemoveContainer" containerID="eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811" Feb 13 19:41:35.283418 containerd[1433]: time="2025-02-13T19:41:35.283068992Z" level=info msg="RemoveContainer for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\"" Feb 13 19:41:35.286908 containerd[1433]: time="2025-02-13T19:41:35.286870088Z" level=info msg="RemoveContainer for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" returns successfully" Feb 13 19:41:35.287341 kubelet[1745]: I0213 19:41:35.287119 1745 scope.go:117] "RemoveContainer" containerID="277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5" Feb 13 19:41:35.288243 containerd[1433]: time="2025-02-13T19:41:35.288065500Z" level=info msg="RemoveContainer for \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\"" Feb 13 19:41:35.290288 containerd[1433]: time="2025-02-13T19:41:35.290248467Z" level=info msg="RemoveContainer for \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\" returns successfully" Feb 13 19:41:35.290501 kubelet[1745]: I0213 19:41:35.290462 1745 scope.go:117] "RemoveContainer" containerID="6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be" Feb 13 19:41:35.291533 containerd[1433]: time="2025-02-13T19:41:35.291493333Z" level=info msg="RemoveContainer for \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\"" Feb 13 19:41:35.294413 containerd[1433]: time="2025-02-13T19:41:35.294370453Z" level=info msg="RemoveContainer for \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\" returns successfully" Feb 13 19:41:35.294600 kubelet[1745]: I0213 19:41:35.294575 1745 scope.go:117] "RemoveContainer" 
containerID="349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3" Feb 13 19:41:35.295795 containerd[1433]: time="2025-02-13T19:41:35.295719908Z" level=info msg="RemoveContainer for \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\"" Feb 13 19:41:35.302118 containerd[1433]: time="2025-02-13T19:41:35.302075515Z" level=info msg="RemoveContainer for \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\" returns successfully" Feb 13 19:41:35.302325 kubelet[1745]: I0213 19:41:35.302265 1745 scope.go:117] "RemoveContainer" containerID="9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e" Feb 13 19:41:35.303212 containerd[1433]: time="2025-02-13T19:41:35.303186904Z" level=info msg="RemoveContainer for \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\"" Feb 13 19:41:35.305327 containerd[1433]: time="2025-02-13T19:41:35.305272403Z" level=info msg="RemoveContainer for \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\" returns successfully" Feb 13 19:41:35.305476 kubelet[1745]: I0213 19:41:35.305450 1745 scope.go:117] "RemoveContainer" containerID="eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811" Feb 13 19:41:35.305654 containerd[1433]: time="2025-02-13T19:41:35.305622621Z" level=error msg="ContainerStatus for \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\": not found" Feb 13 19:41:35.305768 kubelet[1745]: E0213 19:41:35.305748 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\": not found" containerID="eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811" Feb 13 19:41:35.305853 kubelet[1745]: I0213 
19:41:35.305777 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811"} err="failed to get container status \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb62e572bc85b4de30312fb5fc116ab6cce63513da2da848acea63ce14e88811\": not found" Feb 13 19:41:35.305882 kubelet[1745]: I0213 19:41:35.305854 1745 scope.go:117] "RemoveContainer" containerID="277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5" Feb 13 19:41:35.306154 containerd[1433]: time="2025-02-13T19:41:35.306113757Z" level=error msg="ContainerStatus for \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\": not found" Feb 13 19:41:35.306341 kubelet[1745]: E0213 19:41:35.306276 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\": not found" containerID="277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5" Feb 13 19:41:35.306507 kubelet[1745]: I0213 19:41:35.306306 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5"} err="failed to get container status \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"277e4577a950adf99a9ed51be608723a4d7de11819ff3a624141e8e05a8027c5\": not found" Feb 13 19:41:35.306507 kubelet[1745]: I0213 19:41:35.306425 1745 scope.go:117] "RemoveContainer" 
containerID="6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be" Feb 13 19:41:35.306624 containerd[1433]: time="2025-02-13T19:41:35.306594291Z" level=error msg="ContainerStatus for \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\": not found" Feb 13 19:41:35.306733 kubelet[1745]: E0213 19:41:35.306711 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\": not found" containerID="6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be" Feb 13 19:41:35.306768 kubelet[1745]: I0213 19:41:35.306739 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be"} err="failed to get container status \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a06b49f4d63335e4548e5bad9e128d1eb54e5e7af6b43fd5eb84f87226b72be\": not found" Feb 13 19:41:35.306768 kubelet[1745]: I0213 19:41:35.306757 1745 scope.go:117] "RemoveContainer" containerID="349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3" Feb 13 19:41:35.306968 containerd[1433]: time="2025-02-13T19:41:35.306936826Z" level=error msg="ContainerStatus for \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\": not found" Feb 13 19:41:35.307052 kubelet[1745]: E0213 19:41:35.307033 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\": not found" containerID="349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3" Feb 13 19:41:35.307092 kubelet[1745]: I0213 19:41:35.307058 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3"} err="failed to get container status \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"349f059a881b1a82120a66dc7207639d1fc824afb789d0cf5ed2e10558e3e8e3\": not found" Feb 13 19:41:35.307092 kubelet[1745]: I0213 19:41:35.307073 1745 scope.go:117] "RemoveContainer" containerID="9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e" Feb 13 19:41:35.307309 containerd[1433]: time="2025-02-13T19:41:35.307281042Z" level=error msg="ContainerStatus for \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\": not found" Feb 13 19:41:35.307496 kubelet[1745]: E0213 19:41:35.307475 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\": not found" containerID="9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e" Feb 13 19:41:35.307534 kubelet[1745]: I0213 19:41:35.307499 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e"} err="failed to get container status \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"9fbc4a557de6223cde1e3991d24848deb36074cb811323fd26f6fddb7ceea30e\": not found" Feb 13 19:41:35.816348 systemd[1]: var-lib-kubelet-pods-93ee06e7\x2d102a\x2d46b7\x2da9ad\x2d200a00887cff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2x8mz.mount: Deactivated successfully. Feb 13 19:41:35.816455 systemd[1]: var-lib-kubelet-pods-93ee06e7\x2d102a\x2d46b7\x2da9ad\x2d200a00887cff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:41:35.816517 systemd[1]: var-lib-kubelet-pods-93ee06e7\x2d102a\x2d46b7\x2da9ad\x2d200a00887cff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:41:36.104779 kubelet[1745]: E0213 19:41:36.104665 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:37.104814 kubelet[1745]: E0213 19:41:37.104754 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:37.196958 kubelet[1745]: I0213 19:41:37.196931 1745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" path="/var/lib/kubelet/pods/93ee06e7-102a-46b7-a9ad-200a00887cff/volumes" Feb 13 19:41:37.580616 kubelet[1745]: I0213 19:41:37.580579 1745 topology_manager.go:215] "Topology Admit Handler" podUID="66964f07-6198-4cba-9ba2-206015e9cf17" podNamespace="kube-system" podName="cilium-operator-599987898-x6sft" Feb 13 19:41:37.580764 kubelet[1745]: E0213 19:41:37.580630 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" containerName="apply-sysctl-overwrites" Feb 13 19:41:37.580764 kubelet[1745]: E0213 19:41:37.580640 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" containerName="mount-bpf-fs" Feb 13 
19:41:37.580764 kubelet[1745]: E0213 19:41:37.580646 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" containerName="clean-cilium-state" Feb 13 19:41:37.580764 kubelet[1745]: E0213 19:41:37.580652 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" containerName="cilium-agent" Feb 13 19:41:37.580764 kubelet[1745]: E0213 19:41:37.580660 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" containerName="mount-cgroup" Feb 13 19:41:37.580764 kubelet[1745]: I0213 19:41:37.580678 1745 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ee06e7-102a-46b7-a9ad-200a00887cff" containerName="cilium-agent" Feb 13 19:41:37.582756 kubelet[1745]: W0213 19:41:37.582694 1745 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.72" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.72' and this object Feb 13 19:41:37.582756 kubelet[1745]: E0213 19:41:37.582724 1745 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.72" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.72' and this object Feb 13 19:41:37.585518 systemd[1]: Created slice kubepods-besteffort-pod66964f07_6198_4cba_9ba2_206015e9cf17.slice - libcontainer container kubepods-besteffort-pod66964f07_6198_4cba_9ba2_206015e9cf17.slice. 
Feb 13 19:41:37.590348 kubelet[1745]: I0213 19:41:37.590303 1745 topology_manager.go:215] "Topology Admit Handler" podUID="dda4a7e8-0d30-4787-859c-2d2507c8ec01" podNamespace="kube-system" podName="cilium-vz4st" Feb 13 19:41:37.594855 systemd[1]: Created slice kubepods-burstable-poddda4a7e8_0d30_4787_859c_2d2507c8ec01.slice - libcontainer container kubepods-burstable-poddda4a7e8_0d30_4787_859c_2d2507c8ec01.slice. Feb 13 19:41:37.659391 kubelet[1745]: I0213 19:41:37.659341 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cilium-cgroup\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659537 kubelet[1745]: I0213 19:41:37.659409 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cilium-ipsec-secrets\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659537 kubelet[1745]: I0213 19:41:37.659435 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-host-proc-sys-net\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659537 kubelet[1745]: I0213 19:41:37.659450 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dda4a7e8-0d30-4787-859c-2d2507c8ec01-hubble-tls\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659537 kubelet[1745]: I0213 19:41:37.659467 1745 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vzl\" (UniqueName: \"kubernetes.io/projected/dda4a7e8-0d30-4787-859c-2d2507c8ec01-kube-api-access-72vzl\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659537 kubelet[1745]: I0213 19:41:37.659483 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-etc-cni-netd\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659537 kubelet[1745]: I0213 19:41:37.659497 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-lib-modules\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659666 kubelet[1745]: I0213 19:41:37.659536 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-xtables-lock\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659666 kubelet[1745]: I0213 19:41:37.659583 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cilium-config-path\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659666 kubelet[1745]: I0213 19:41:37.659602 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4mwvl\" (UniqueName: \"kubernetes.io/projected/66964f07-6198-4cba-9ba2-206015e9cf17-kube-api-access-4mwvl\") pod \"cilium-operator-599987898-x6sft\" (UID: \"66964f07-6198-4cba-9ba2-206015e9cf17\") " pod="kube-system/cilium-operator-599987898-x6sft" Feb 13 19:41:37.659666 kubelet[1745]: I0213 19:41:37.659634 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-hostproc\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659666 kubelet[1745]: I0213 19:41:37.659658 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66964f07-6198-4cba-9ba2-206015e9cf17-cilium-config-path\") pod \"cilium-operator-599987898-x6sft\" (UID: \"66964f07-6198-4cba-9ba2-206015e9cf17\") " pod="kube-system/cilium-operator-599987898-x6sft" Feb 13 19:41:37.659777 kubelet[1745]: I0213 19:41:37.659685 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-host-proc-sys-kernel\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659777 kubelet[1745]: I0213 19:41:37.659704 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cilium-run\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659777 kubelet[1745]: I0213 19:41:37.659719 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-bpf-maps\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659777 kubelet[1745]: I0213 19:41:37.659740 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cni-path\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:37.659777 kubelet[1745]: I0213 19:41:37.659755 1745 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dda4a7e8-0d30-4787-859c-2d2507c8ec01-clustermesh-secrets\") pod \"cilium-vz4st\" (UID: \"dda4a7e8-0d30-4787-859c-2d2507c8ec01\") " pod="kube-system/cilium-vz4st" Feb 13 19:41:38.105258 kubelet[1745]: E0213 19:41:38.105199 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:38.204609 kubelet[1745]: E0213 19:41:38.204562 1745 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:41:38.762048 kubelet[1745]: E0213 19:41:38.761987 1745 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:41:38.762221 kubelet[1745]: E0213 19:41:38.762083 1745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cilium-config-path podName:dda4a7e8-0d30-4787-859c-2d2507c8ec01 nodeName:}" failed. No retries permitted until 2025-02-13 19:41:39.262060808 +0000 UTC m=+46.838027936 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dda4a7e8-0d30-4787-859c-2d2507c8ec01-cilium-config-path") pod "cilium-vz4st" (UID: "dda4a7e8-0d30-4787-859c-2d2507c8ec01") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:41:38.762221 kubelet[1745]: E0213 19:41:38.761998 1745 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:41:38.762221 kubelet[1745]: E0213 19:41:38.762156 1745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66964f07-6198-4cba-9ba2-206015e9cf17-cilium-config-path podName:66964f07-6198-4cba-9ba2-206015e9cf17 nodeName:}" failed. No retries permitted until 2025-02-13 19:41:39.262141108 +0000 UTC m=+46.838108196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/66964f07-6198-4cba-9ba2-206015e9cf17-cilium-config-path") pod "cilium-operator-599987898-x6sft" (UID: "66964f07-6198-4cba-9ba2-206015e9cf17") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:41:39.106096 kubelet[1745]: E0213 19:41:39.105990 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:39.387344 kubelet[1745]: E0213 19:41:39.387212 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:39.387728 containerd[1433]: time="2025-02-13T19:41:39.387685478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-x6sft,Uid:66964f07-6198-4cba-9ba2-206015e9cf17,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:39.403714 containerd[1433]: time="2025-02-13T19:41:39.403538581Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:39.403714 containerd[1433]: time="2025-02-13T19:41:39.403594475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:39.403714 containerd[1433]: time="2025-02-13T19:41:39.403605477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:39.403714 containerd[1433]: time="2025-02-13T19:41:39.403676254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:39.407360 kubelet[1745]: E0213 19:41:39.407271 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:39.407759 containerd[1433]: time="2025-02-13T19:41:39.407717598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vz4st,Uid:dda4a7e8-0d30-4787-859c-2d2507c8ec01,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:39.418444 systemd[1]: run-containerd-runc-k8s.io-ee1b20a7dfdd4a0d9b7739ebcbc3c5429456318f73e630acd4d0ce92e16eafc7-runc.n2NY30.mount: Deactivated successfully. Feb 13 19:41:39.425500 containerd[1433]: time="2025-02-13T19:41:39.425401049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:39.425500 containerd[1433]: time="2025-02-13T19:41:39.425454502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:39.425500 containerd[1433]: time="2025-02-13T19:41:39.425476467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:39.425589 systemd[1]: Started cri-containerd-ee1b20a7dfdd4a0d9b7739ebcbc3c5429456318f73e630acd4d0ce92e16eafc7.scope - libcontainer container ee1b20a7dfdd4a0d9b7739ebcbc3c5429456318f73e630acd4d0ce92e16eafc7. Feb 13 19:41:39.425832 containerd[1433]: time="2025-02-13T19:41:39.425614979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:39.446499 systemd[1]: Started cri-containerd-7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d.scope - libcontainer container 7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d. Feb 13 19:41:39.455249 containerd[1433]: time="2025-02-13T19:41:39.455175165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-x6sft,Uid:66964f07-6198-4cba-9ba2-206015e9cf17,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee1b20a7dfdd4a0d9b7739ebcbc3c5429456318f73e630acd4d0ce92e16eafc7\"" Feb 13 19:41:39.455950 kubelet[1745]: E0213 19:41:39.455910 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:39.456850 containerd[1433]: time="2025-02-13T19:41:39.456814948Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:41:39.466255 containerd[1433]: time="2025-02-13T19:41:39.466223946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vz4st,Uid:dda4a7e8-0d30-4787-859c-2d2507c8ec01,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\"" Feb 13 19:41:39.466938 kubelet[1745]: E0213 19:41:39.466913 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:39.468888 containerd[1433]: time="2025-02-13T19:41:39.468858122Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:41:39.478084 containerd[1433]: time="2025-02-13T19:41:39.478037266Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a\"" Feb 13 19:41:39.478529 containerd[1433]: time="2025-02-13T19:41:39.478471367Z" level=info msg="StartContainer for \"9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a\"" Feb 13 19:41:39.508459 systemd[1]: Started cri-containerd-9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a.scope - libcontainer container 9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a. Feb 13 19:41:39.526534 containerd[1433]: time="2025-02-13T19:41:39.526390883Z" level=info msg="StartContainer for \"9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a\" returns successfully" Feb 13 19:41:39.606712 systemd[1]: cri-containerd-9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a.scope: Deactivated successfully. 
Feb 13 19:41:39.646610 containerd[1433]: time="2025-02-13T19:41:39.646399639Z" level=info msg="shim disconnected" id=9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a namespace=k8s.io Feb 13 19:41:39.646610 containerd[1433]: time="2025-02-13T19:41:39.646463174Z" level=warning msg="cleaning up after shim disconnected" id=9c8c0d7dea3e14cd73e8841a49fbfca0463ff9fa0f2aef32816ef89ea1f9e09a namespace=k8s.io Feb 13 19:41:39.646610 containerd[1433]: time="2025-02-13T19:41:39.646472656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:40.107154 kubelet[1745]: E0213 19:41:40.107113 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:40.299804 kubelet[1745]: E0213 19:41:40.299773 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:40.302840 containerd[1433]: time="2025-02-13T19:41:40.302597039Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:41:40.314557 containerd[1433]: time="2025-02-13T19:41:40.314379321Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1\"" Feb 13 19:41:40.315419 containerd[1433]: time="2025-02-13T19:41:40.314911880Z" level=info msg="StartContainer for \"68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1\"" Feb 13 19:41:40.341467 systemd[1]: Started cri-containerd-68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1.scope - libcontainer container 68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1. 
Feb 13 19:41:40.364230 containerd[1433]: time="2025-02-13T19:41:40.363549305Z" level=info msg="StartContainer for \"68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1\" returns successfully" Feb 13 19:41:40.375628 systemd[1]: cri-containerd-68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1.scope: Deactivated successfully. Feb 13 19:41:40.395302 containerd[1433]: time="2025-02-13T19:41:40.394957188Z" level=info msg="shim disconnected" id=68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1 namespace=k8s.io Feb 13 19:41:40.395302 containerd[1433]: time="2025-02-13T19:41:40.395009959Z" level=warning msg="cleaning up after shim disconnected" id=68406fc2da830ee5ddd3d1d19c90865ebdfd91a2d82ecbcf2439d64e6a5c30d1 namespace=k8s.io Feb 13 19:41:40.395302 containerd[1433]: time="2025-02-13T19:41:40.395018641Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:40.425660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518139932.mount: Deactivated successfully. 
Feb 13 19:41:40.657334 containerd[1433]: time="2025-02-13T19:41:40.657211348Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:40.658552 containerd[1433]: time="2025-02-13T19:41:40.658443105Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:41:40.659373 containerd[1433]: time="2025-02-13T19:41:40.659343987Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:40.660711 containerd[1433]: time="2025-02-13T19:41:40.660677806Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.203826729s" Feb 13 19:41:40.660711 containerd[1433]: time="2025-02-13T19:41:40.660714614Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:41:40.663091 containerd[1433]: time="2025-02-13T19:41:40.663026372Z" level=info msg="CreateContainer within sandbox \"ee1b20a7dfdd4a0d9b7739ebcbc3c5429456318f73e630acd4d0ce92e16eafc7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:41:40.673754 containerd[1433]: time="2025-02-13T19:41:40.673673480Z" level=info msg="CreateContainer within sandbox 
\"ee1b20a7dfdd4a0d9b7739ebcbc3c5429456318f73e630acd4d0ce92e16eafc7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"80357c4c01e2b928f599d4eff2fb093d5340bfa0e67e3185bbbc5c905ab497ff\"" Feb 13 19:41:40.674148 containerd[1433]: time="2025-02-13T19:41:40.674119860Z" level=info msg="StartContainer for \"80357c4c01e2b928f599d4eff2fb093d5340bfa0e67e3185bbbc5c905ab497ff\"" Feb 13 19:41:40.703601 systemd[1]: Started cri-containerd-80357c4c01e2b928f599d4eff2fb093d5340bfa0e67e3185bbbc5c905ab497ff.scope - libcontainer container 80357c4c01e2b928f599d4eff2fb093d5340bfa0e67e3185bbbc5c905ab497ff. Feb 13 19:41:40.767376 containerd[1433]: time="2025-02-13T19:41:40.767312115Z" level=info msg="StartContainer for \"80357c4c01e2b928f599d4eff2fb093d5340bfa0e67e3185bbbc5c905ab497ff\" returns successfully" Feb 13 19:41:41.107619 kubelet[1745]: E0213 19:41:41.107570 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:41.305165 kubelet[1745]: E0213 19:41:41.305129 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:41.307021 kubelet[1745]: E0213 19:41:41.306973 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:41.307311 containerd[1433]: time="2025-02-13T19:41:41.307242710Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:41:41.329960 containerd[1433]: time="2025-02-13T19:41:41.329891788Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns 
container id \"2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5\"" Feb 13 19:41:41.330525 containerd[1433]: time="2025-02-13T19:41:41.330499519Z" level=info msg="StartContainer for \"2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5\"" Feb 13 19:41:41.339518 kubelet[1745]: I0213 19:41:41.339279 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-x6sft" podStartSLOduration=3.13425181 podStartE2EDuration="4.339265567s" podCreationTimestamp="2025-02-13 19:41:37 +0000 UTC" firstStartedPulling="2025-02-13 19:41:39.456594737 +0000 UTC m=+47.032561865" lastFinishedPulling="2025-02-13 19:41:40.661608494 +0000 UTC m=+48.237575622" observedRunningTime="2025-02-13 19:41:41.338875443 +0000 UTC m=+48.914842571" watchObservedRunningTime="2025-02-13 19:41:41.339265567 +0000 UTC m=+48.915232695" Feb 13 19:41:41.359503 systemd[1]: Started cri-containerd-2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5.scope - libcontainer container 2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5. Feb 13 19:41:41.381962 containerd[1433]: time="2025-02-13T19:41:41.381908472Z" level=info msg="StartContainer for \"2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5\" returns successfully" Feb 13 19:41:41.383137 systemd[1]: cri-containerd-2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5.scope: Deactivated successfully. Feb 13 19:41:41.403040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5-rootfs.mount: Deactivated successfully. 
Feb 13 19:41:41.407580 containerd[1433]: time="2025-02-13T19:41:41.407525110Z" level=info msg="shim disconnected" id=2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5 namespace=k8s.io Feb 13 19:41:41.407580 containerd[1433]: time="2025-02-13T19:41:41.407575601Z" level=warning msg="cleaning up after shim disconnected" id=2b982d05ff1e74e31a9f4becbb213e70b08137c31e19c506a1c18cd4b4ce9df5 namespace=k8s.io Feb 13 19:41:41.407580 containerd[1433]: time="2025-02-13T19:41:41.407584403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:42.108211 kubelet[1745]: E0213 19:41:42.108160 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:42.310809 kubelet[1745]: E0213 19:41:42.310656 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:42.310809 kubelet[1745]: E0213 19:41:42.310699 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:42.312479 containerd[1433]: time="2025-02-13T19:41:42.312437329Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:41:42.322280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123645389.mount: Deactivated successfully. 
Feb 13 19:41:42.323125 containerd[1433]: time="2025-02-13T19:41:42.323083294Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba\"" Feb 13 19:41:42.323660 containerd[1433]: time="2025-02-13T19:41:42.323627607Z" level=info msg="StartContainer for \"1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba\"" Feb 13 19:41:42.349464 systemd[1]: Started cri-containerd-1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba.scope - libcontainer container 1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba. Feb 13 19:41:42.366410 systemd[1]: cri-containerd-1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba.scope: Deactivated successfully. Feb 13 19:41:42.368453 containerd[1433]: time="2025-02-13T19:41:42.368290618Z" level=info msg="StartContainer for \"1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba\" returns successfully" Feb 13 19:41:42.387249 containerd[1433]: time="2025-02-13T19:41:42.387193653Z" level=info msg="shim disconnected" id=1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba namespace=k8s.io Feb 13 19:41:42.387249 containerd[1433]: time="2025-02-13T19:41:42.387241943Z" level=warning msg="cleaning up after shim disconnected" id=1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba namespace=k8s.io Feb 13 19:41:42.387249 containerd[1433]: time="2025-02-13T19:41:42.387251545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:42.394798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ab02ef95004dadd67d532d121232026eb672701908d10d1ffe365f0907f8fba-rootfs.mount: Deactivated successfully. 
Feb 13 19:41:43.108891 kubelet[1745]: E0213 19:41:43.108847 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:43.206092 kubelet[1745]: E0213 19:41:43.206015 1745 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:41:43.314999 kubelet[1745]: E0213 19:41:43.314890 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:43.316905 containerd[1433]: time="2025-02-13T19:41:43.316786660Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:41:43.333520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914090351.mount: Deactivated successfully. Feb 13 19:41:43.343908 containerd[1433]: time="2025-02-13T19:41:43.343856777Z" level=info msg="CreateContainer within sandbox \"7dc326de54d6b14f54083e02bfbaf7fcfe81ed41c0c857af8ad4b28bd2cebb8d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bbc50e48cf14fbc846657a91f71936327f77b1a4e8960624efce2e2afbefa1d2\"" Feb 13 19:41:43.344523 containerd[1433]: time="2025-02-13T19:41:43.344495624Z" level=info msg="StartContainer for \"bbc50e48cf14fbc846657a91f71936327f77b1a4e8960624efce2e2afbefa1d2\"" Feb 13 19:41:43.369493 systemd[1]: Started cri-containerd-bbc50e48cf14fbc846657a91f71936327f77b1a4e8960624efce2e2afbefa1d2.scope - libcontainer container bbc50e48cf14fbc846657a91f71936327f77b1a4e8960624efce2e2afbefa1d2. 
Feb 13 19:41:43.396710 containerd[1433]: time="2025-02-13T19:41:43.395599653Z" level=info msg="StartContainer for \"bbc50e48cf14fbc846657a91f71936327f77b1a4e8960624efce2e2afbefa1d2\" returns successfully" Feb 13 19:41:43.647345 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:41:44.109577 kubelet[1745]: E0213 19:41:44.109533 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:44.320107 kubelet[1745]: E0213 19:41:44.319824 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:44.334580 kubelet[1745]: I0213 19:41:44.334518 1745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vz4st" podStartSLOduration=7.33450322 podStartE2EDuration="7.33450322s" podCreationTimestamp="2025-02-13 19:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:41:44.333993002 +0000 UTC m=+51.909960130" watchObservedRunningTime="2025-02-13 19:41:44.33450322 +0000 UTC m=+51.910470348" Feb 13 19:41:44.553125 kubelet[1745]: I0213 19:41:44.553082 1745 setters.go:580] "Node became not ready" node="10.0.0.72" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:41:44Z","lastTransitionTime":"2025-02-13T19:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:41:45.110456 kubelet[1745]: E0213 19:41:45.110392 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:45.410023 kubelet[1745]: E0213 19:41:45.409912 1745 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:46.111041 kubelet[1745]: E0213 19:41:46.110998 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:46.367472 systemd-networkd[1380]: lxc_health: Link UP Feb 13 19:41:46.380356 systemd-networkd[1380]: lxc_health: Gained carrier Feb 13 19:41:47.112019 kubelet[1745]: E0213 19:41:47.111978 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:47.410366 kubelet[1745]: E0213 19:41:47.409844 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:48.112270 kubelet[1745]: E0213 19:41:48.112229 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:48.191640 systemd-networkd[1380]: lxc_health: Gained IPv6LL Feb 13 19:41:48.327021 kubelet[1745]: E0213 19:41:48.326925 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:49.113343 kubelet[1745]: E0213 19:41:49.113274 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:49.328831 kubelet[1745]: E0213 19:41:49.328569 1745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:50.113737 kubelet[1745]: E0213 19:41:50.113698 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:51.115006 kubelet[1745]: 
E0213 19:41:51.114961 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:52.115621 kubelet[1745]: E0213 19:41:52.115583 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:53.075581 kubelet[1745]: E0213 19:41:53.075533 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:53.086969 containerd[1433]: time="2025-02-13T19:41:53.086863799Z" level=info msg="StopPodSandbox for \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\"" Feb 13 19:41:53.086969 containerd[1433]: time="2025-02-13T19:41:53.086947211Z" level=info msg="TearDown network for sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" successfully" Feb 13 19:41:53.086969 containerd[1433]: time="2025-02-13T19:41:53.086957213Z" level=info msg="StopPodSandbox for \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" returns successfully" Feb 13 19:41:53.088512 containerd[1433]: time="2025-02-13T19:41:53.087489809Z" level=info msg="RemovePodSandbox for \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\"" Feb 13 19:41:53.088512 containerd[1433]: time="2025-02-13T19:41:53.087522494Z" level=info msg="Forcibly stopping sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\"" Feb 13 19:41:53.088512 containerd[1433]: time="2025-02-13T19:41:53.087572261Z" level=info msg="TearDown network for sandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" successfully" Feb 13 19:41:53.090242 containerd[1433]: time="2025-02-13T19:41:53.090167515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Feb 13 19:41:53.090242 containerd[1433]: time="2025-02-13T19:41:53.090215962Z" level=info msg="RemovePodSandbox \"ee94f43f61d78aeeaf0a54522d46ae99dd5c628d371b0e039b4947c246c0009c\" returns successfully" Feb 13 19:41:53.116244 kubelet[1745]: E0213 19:41:53.116208 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:41:54.117254 kubelet[1745]: E0213 19:41:54.117204 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"