Feb 13 19:03:10.947101 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:03:10.947125 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:03:10.947136 kernel: KASLR enabled
Feb 13 19:03:10.947142 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:03:10.947157 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:03:10.947164 kernel: random: crng init done
Feb 13 19:03:10.947170 kernel: secureboot: Secure boot disabled
Feb 13 19:03:10.947177 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:03:10.947183 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:03:10.947191 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:03:10.947197 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947203 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947209 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947215 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947223 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947230 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947237 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947243 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947250 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:03:10.947256 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:03:10.947262 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:03:10.947268 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:03:10.947275 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff]
Feb 13 19:03:10.947281 kernel: Zone ranges:
Feb 13 19:03:10.947287 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:03:10.947295 kernel: DMA32 empty
Feb 13 19:03:10.947301 kernel: Normal empty
Feb 13 19:03:10.947307 kernel: Movable zone start for each node
Feb 13 19:03:10.947313 kernel: Early memory node ranges
Feb 13 19:03:10.947319 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:03:10.947326 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:03:10.947332 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:03:10.947338 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:03:10.947344 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:03:10.947350 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:03:10.947357 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:03:10.947363 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:03:10.947371 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:03:10.947377 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:03:10.947384 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:03:10.947393 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:03:10.947399 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:03:10.947406 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:03:10.947414 kernel: psci: Trusted OS migration not required
Feb 13 19:03:10.947421 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:03:10.947428 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:03:10.947435 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:03:10.947441 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:03:10.947448 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:03:10.947455 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:03:10.947462 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:03:10.947468 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:03:10.947475 kernel: CPU features: detected: Spectre-v4
Feb 13 19:03:10.947483 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:03:10.947490 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:03:10.947497 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:03:10.947504 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:03:10.947510 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:03:10.947517 kernel: alternatives: applying boot alternatives
Feb 13 19:03:10.947525 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:03:10.947532 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:03:10.947538 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:03:10.947545 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:03:10.947552 kernel: Fallback order for Node 0: 0
Feb 13 19:03:10.947561 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:03:10.947567 kernel: Policy zone: DMA
Feb 13 19:03:10.947574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:03:10.947580 kernel: software IO TLB: area num 4.
Feb 13 19:03:10.947587 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:03:10.947594 kernel: Memory: 2387532K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184756K reserved, 0K cma-reserved)
Feb 13 19:03:10.947601 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:03:10.947608 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:03:10.947615 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:03:10.947622 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:03:10.947630 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:03:10.947638 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:03:10.947647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:03:10.947654 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:03:10.947662 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:03:10.947668 kernel: GICv3: 256 SPIs implemented
Feb 13 19:03:10.947675 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:03:10.947681 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:03:10.947688 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:03:10.947695 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:03:10.947701 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:03:10.947708 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:03:10.947715 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:03:10.947723 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:03:10.947730 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:03:10.947738 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:03:10.947745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:03:10.947752 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:03:10.947760 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:03:10.947767 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:03:10.947775 kernel: arm-pv: using stolen time PV
Feb 13 19:03:10.947783 kernel: Console: colour dummy device 80x25
Feb 13 19:03:10.947791 kernel: ACPI: Core revision 20230628
Feb 13 19:03:10.947799 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:03:10.947812 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:03:10.947824 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:03:10.947833 kernel: landlock: Up and running.
Feb 13 19:03:10.947841 kernel: SELinux: Initializing.
Feb 13 19:03:10.947849 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:10.947856 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:10.947865 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:03:10.947873 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:03:10.947881 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:03:10.947891 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:03:10.947898 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:03:10.947916 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:03:10.947923 kernel: Remapping and enabling EFI services.
Feb 13 19:03:10.947930 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:03:10.947937 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:03:10.947944 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:03:10.947951 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:03:10.947959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:03:10.947968 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:03:10.947976 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:03:10.947988 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:03:10.947997 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:03:10.948004 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:03:10.948011 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:03:10.948018 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:03:10.948026 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:03:10.948033 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:03:10.948042 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:03:10.948049 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:03:10.948056 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:03:10.948063 kernel: SMP: Total of 4 processors activated.
Feb 13 19:03:10.948070 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:03:10.948077 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:03:10.948089 kernel: CPU features: detected: Common not Private translations
Feb 13 19:03:10.948099 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:03:10.948108 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:03:10.948116 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:03:10.948125 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:03:10.948132 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:03:10.948140 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:03:10.948152 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:03:10.948160 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:03:10.948167 kernel: alternatives: applying system-wide alternatives
Feb 13 19:03:10.948175 kernel: devtmpfs: initialized
Feb 13 19:03:10.948182 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:03:10.948193 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:03:10.948200 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:03:10.948207 kernel: SMBIOS 3.0.0 present.
Feb 13 19:03:10.948214 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:03:10.948221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:03:10.948228 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:03:10.948235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:03:10.948243 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:03:10.948250 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:03:10.948258 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:03:10.948266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:03:10.948273 kernel: cpuidle: using governor menu
Feb 13 19:03:10.948280 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:03:10.948287 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:03:10.948294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:03:10.948301 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:03:10.948309 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:03:10.948316 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:03:10.948324 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:03:10.948332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:03:10.948339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:03:10.948346 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:03:10.948353 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:03:10.948360 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:03:10.948367 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:03:10.948374 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:03:10.948382 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:03:10.948390 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:03:10.948397 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:03:10.948404 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:03:10.948411 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:03:10.948418 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:03:10.948425 kernel: ACPI: Interpreter enabled
Feb 13 19:03:10.948432 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:03:10.948439 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:03:10.948447 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:03:10.948456 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:03:10.948463 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:03:10.948621 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:03:10.948714 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:03:10.948786 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:03:10.948851 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:03:10.948939 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:03:10.948952 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:03:10.948959 kernel: PCI host bridge to bus 0000:00
Feb 13 19:03:10.949032 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:03:10.949097 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:03:10.949163 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:03:10.949220 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:03:10.949303 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:03:10.949382 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:03:10.949451 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:03:10.949527 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:03:10.949594 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:03:10.949671 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:03:10.949739 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:03:10.949808 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:03:10.949871 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:03:10.949942 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:03:10.950003 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:03:10.950012 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:03:10.950020 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:03:10.950027 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:03:10.950034 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:03:10.950042 kernel: iommu: Default domain type: Translated
Feb 13 19:03:10.950051 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:03:10.950059 kernel: efivars: Registered efivars operations
Feb 13 19:03:10.950066 kernel: vgaarb: loaded
Feb 13 19:03:10.950073 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:03:10.950080 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:03:10.950087 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:03:10.950095 kernel: pnp: PnP ACPI init
Feb 13 19:03:10.950176 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:03:10.950189 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:03:10.950197 kernel: NET: Registered PF_INET protocol family
Feb 13 19:03:10.950205 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:03:10.950212 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:03:10.950219 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:03:10.950227 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:03:10.950234 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:03:10.950241 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:03:10.950248 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:10.950257 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:10.950265 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:03:10.950272 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:03:10.950279 kernel: kvm [1]: HYP mode not available
Feb 13 19:03:10.950286 kernel: Initialise system trusted keyrings
Feb 13 19:03:10.950293 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:03:10.950300 kernel: Key type asymmetric registered
Feb 13 19:03:10.950307 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:03:10.950315 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:03:10.950324 kernel: io scheduler mq-deadline registered
Feb 13 19:03:10.950331 kernel: io scheduler kyber registered
Feb 13 19:03:10.950338 kernel: io scheduler bfq registered
Feb 13 19:03:10.950345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:03:10.950352 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:03:10.950360 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:03:10.950431 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:03:10.950441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:03:10.950448 kernel: thunder_xcv, ver 1.0
Feb 13 19:03:10.950457 kernel: thunder_bgx, ver 1.0
Feb 13 19:03:10.950464 kernel: nicpf, ver 1.0
Feb 13 19:03:10.950471 kernel: nicvf, ver 1.0
Feb 13 19:03:10.950552 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:03:10.950616 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:03:10 UTC (1739473390)
Feb 13 19:03:10.950626 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:03:10.950633 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:03:10.950641 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:03:10.950650 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:03:10.950658 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:03:10.950665 kernel: Segment Routing with IPv6
Feb 13 19:03:10.950673 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:03:10.950680 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:03:10.950687 kernel: Key type dns_resolver registered
Feb 13 19:03:10.950695 kernel: registered taskstats version 1
Feb 13 19:03:10.950702 kernel: Loading compiled-in X.509 certificates
Feb 13 19:03:10.950709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:03:10.950719 kernel: Key type .fscrypt registered
Feb 13 19:03:10.950726 kernel: Key type fscrypt-provisioning registered
Feb 13 19:03:10.950733 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:03:10.950740 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:03:10.950748 kernel: ima: No architecture policies found
Feb 13 19:03:10.950755 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:03:10.950762 kernel: clk: Disabling unused clocks
Feb 13 19:03:10.950769 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:03:10.950776 kernel: Run /init as init process
Feb 13 19:03:10.950785 kernel: with arguments:
Feb 13 19:03:10.950792 kernel: /init
Feb 13 19:03:10.950799 kernel: with environment:
Feb 13 19:03:10.950806 kernel: HOME=/
Feb 13 19:03:10.950814 kernel: TERM=linux
Feb 13 19:03:10.950821 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:03:10.950829 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:03:10.950839 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:03:10.950849 systemd[1]: Detected virtualization kvm.
Feb 13 19:03:10.950857 systemd[1]: Detected architecture arm64.
Feb 13 19:03:10.950864 systemd[1]: Running in initrd.
Feb 13 19:03:10.950872 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:03:10.950880 systemd[1]: Hostname set to .
Feb 13 19:03:10.950887 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:03:10.950895 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:03:10.950903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:10.950929 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:10.950938 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:03:10.950945 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:03:10.950954 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:03:10.950962 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:03:10.950971 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:03:10.950981 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:03:10.950989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:10.950997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:10.951004 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:03:10.951012 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:03:10.951020 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:03:10.951028 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:03:10.951035 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:03:10.951043 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:03:10.951052 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:03:10.951060 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:03:10.951068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:10.951076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:10.951084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:10.951091 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:03:10.951099 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:03:10.951107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:03:10.951116 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:03:10.951124 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:03:10.951132 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:03:10.951139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:03:10.951152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:10.951161 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:03:10.951169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:10.951179 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:03:10.951187 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:03:10.951195 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:10.951203 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:03:10.951211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:10.951218 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:03:10.951246 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 19:03:10.951264 kernel: Bridge firewalling registered
Feb 13 19:03:10.951272 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:10.951280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:10.951290 systemd-journald[239]: Journal started
Feb 13 19:03:10.951308 systemd-journald[239]: Runtime Journal (/run/log/journal/a3295a52ecaf420d979218a768442f74) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:03:10.927662 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:03:10.954773 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:03:10.946534 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:03:10.956095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:10.968088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:10.971138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:03:10.974166 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:10.977357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:10.979792 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:03:10.983096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:10.987081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:10.996166 dracut-cmdline[278]: dracut-dracut-053
Feb 13 19:03:10.998882 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:03:11.037666 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 19:03:11.037685 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:03:11.037717 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:03:11.048179 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 19:03:11.049437 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:11.050748 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:11.091951 kernel: SCSI subsystem initialized
Feb 13 19:03:11.100953 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:03:11.108956 kernel: iscsi: registered transport (tcp)
Feb 13 19:03:11.130987 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:03:11.131048 kernel: QLogic iSCSI HBA Driver
Feb 13 19:03:11.176669 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:03:11.185084 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:03:11.209926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:03:11.209995 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:03:11.210006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:03:11.256940 kernel: raid6: neonx8 gen() 15783 MB/s
Feb 13 19:03:11.273943 kernel: raid6: neonx4 gen() 15769 MB/s
Feb 13 19:03:11.290947 kernel: raid6: neonx2 gen() 13183 MB/s
Feb 13 19:03:11.307933 kernel: raid6: neonx1 gen() 10425 MB/s
Feb 13 19:03:11.324957 kernel: raid6: int64x8 gen() 6755 MB/s
Feb 13 19:03:11.341945 kernel: raid6: int64x4 gen() 7338 MB/s
Feb 13 19:03:11.358950 kernel: raid6: int64x2 gen() 6109 MB/s
Feb 13 19:03:11.376208 kernel: raid6: int64x1 gen() 5030 MB/s
Feb 13 19:03:11.376264 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Feb 13 19:03:11.394231 kernel: raid6: .... xor() 11878 MB/s, rmw enabled
Feb 13 19:03:11.394295 kernel: raid6: using neon recovery algorithm
Feb 13 19:03:11.401039 kernel: xor: measuring software checksum speed
Feb 13 19:03:11.401099 kernel: 8regs : 21613 MB/sec
Feb 13 19:03:11.402397 kernel: 32regs : 20670 MB/sec
Feb 13 19:03:11.402430 kernel: arm64_neon : 27851 MB/sec
Feb 13 19:03:11.402450 kernel: xor: using function: arm64_neon (27851 MB/sec)
Feb 13 19:03:11.481961 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:03:11.506622 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:03:11.516195 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:11.537843 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 19:03:11.545458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:11.558114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:03:11.570714 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Feb 13 19:03:11.602018 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:03:11.615120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:03:11.658364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:03:11.667452 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:03:11.680161 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:03:11.690183 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:03:11.691566 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:03:11.694120 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:03:11.702105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:03:11.716098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:03:11.733935 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:03:11.749767 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:03:11.749884 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:03:11.749895 kernel: GPT:9289727 != 19775487 Feb 13 19:03:11.749918 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:03:11.749929 kernel: GPT:9289727 != 19775487 Feb 13 19:03:11.749940 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:03:11.749949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:03:11.738989 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:03:11.739120 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:03:11.751155 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:03:11.754565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 19:03:11.754754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:03:11.758287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:03:11.767184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:03:11.778011 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (517) Feb 13 19:03:11.779982 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (521) Feb 13 19:03:11.782590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:03:11.793139 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:03:11.801365 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:03:11.819044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:03:11.825806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:03:11.827310 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:03:11.845118 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:03:11.850115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:03:11.869621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:03:11.876828 disk-uuid[553]: Primary Header is updated. Feb 13 19:03:11.876828 disk-uuid[553]: Secondary Entries is updated. Feb 13 19:03:11.876828 disk-uuid[553]: Secondary Header is updated. 
Feb 13 19:03:11.880932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:03:12.896949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:03:12.897123 disk-uuid[562]: The operation has completed successfully. Feb 13 19:03:12.925475 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:03:12.925577 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:03:12.961101 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:03:12.965259 sh[571]: Success Feb 13 19:03:12.980660 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:03:13.021371 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:03:13.036377 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:03:13.038855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:03:13.049578 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 Feb 13 19:03:13.049627 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:03:13.049637 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:03:13.051480 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:03:13.052236 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:03:13.055592 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:03:13.057280 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:03:13.067085 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:03:13.069313 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 19:03:13.081108 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:03:13.081172 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:03:13.081196 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:03:13.083947 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:03:13.094687 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:03:13.096937 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:03:13.102496 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:03:13.113154 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:03:13.211956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:03:13.233175 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:03:13.265784 ignition[664]: Ignition 2.20.0 Feb 13 19:03:13.265795 ignition[664]: Stage: fetch-offline Feb 13 19:03:13.265835 ignition[664]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:13.265844 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:03:13.266132 ignition[664]: parsed url from cmdline: "" Feb 13 19:03:13.266136 ignition[664]: no config URL provided Feb 13 19:03:13.266140 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:03:13.266156 ignition[664]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:03:13.266183 ignition[664]: op(1): [started] loading QEMU firmware config module Feb 13 19:03:13.274092 systemd-networkd[762]: lo: Link UP Feb 13 19:03:13.266188 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:03:13.274095 systemd-networkd[762]: lo: Gained carrier Feb 13 19:03:13.275106 systemd-networkd[762]: Enumeration completed Feb 13 19:03:13.275793 systemd-networkd[762]: eth0: 
found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:03:13.275796 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:03:13.276480 systemd-networkd[762]: eth0: Link UP Feb 13 19:03:13.276483 systemd-networkd[762]: eth0: Gained carrier Feb 13 19:03:13.285058 ignition[664]: op(1): [finished] loading QEMU firmware config module Feb 13 19:03:13.276490 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:03:13.285085 ignition[664]: QEMU firmware config was not found. Ignoring... Feb 13 19:03:13.278039 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:03:13.294181 ignition[664]: parsing config with SHA512: 8f69ad1f6addead4108f3dffc7f59dee52273e0bcb8e677dedb0da53ca5d3303fd69b0defe7b56150f471bb28397e6bc99d4793130f750e3bb843c6ed343d337 Feb 13 19:03:13.279925 systemd[1]: Reached target network.target - Network. Feb 13 19:03:13.295956 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:03:13.297736 unknown[664]: fetched base config from "system" Feb 13 19:03:13.298013 ignition[664]: fetch-offline: fetch-offline passed Feb 13 19:03:13.297743 unknown[664]: fetched user config from "qemu" Feb 13 19:03:13.298094 ignition[664]: Ignition finished successfully Feb 13 19:03:13.299820 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:03:13.301568 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:03:13.308076 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 19:03:13.331116 ignition[771]: Ignition 2.20.0 Feb 13 19:03:13.331127 ignition[771]: Stage: kargs Feb 13 19:03:13.331300 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:13.331310 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:03:13.331993 ignition[771]: kargs: kargs passed Feb 13 19:03:13.332037 ignition[771]: Ignition finished successfully Feb 13 19:03:13.335402 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:03:13.345089 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:03:13.354867 ignition[778]: Ignition 2.20.0 Feb 13 19:03:13.354876 ignition[778]: Stage: disks Feb 13 19:03:13.355133 ignition[778]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:13.355151 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:03:13.357729 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:03:13.355814 ignition[778]: disks: disks passed Feb 13 19:03:13.359757 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:03:13.355857 ignition[778]: Ignition finished successfully Feb 13 19:03:13.361613 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:03:13.363391 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:03:13.365326 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:03:13.367000 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:03:13.378105 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:03:13.390244 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:03:13.458577 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Feb 13 19:03:13.649685 systemd-resolved[280]: Detected conflict on linux IN A 10.0.0.49 Feb 13 19:03:13.649700 systemd-resolved[280]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 13 19:03:14.057064 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:03:14.105922 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none. Feb 13 19:03:14.107427 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:03:14.109801 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:03:14.123052 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:03:14.126979 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:03:14.129185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:03:14.139397 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797) Feb 13 19:03:14.139428 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:03:14.139439 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:03:14.129249 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:03:14.143616 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:03:14.129276 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:03:14.139662 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:03:14.147781 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:03:14.163173 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:03:14.166828 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:03:14.207833 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:03:14.211423 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:03:14.220599 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:03:14.226397 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:03:14.342804 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:03:14.364401 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:03:14.370924 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:03:14.373451 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:03:14.399648 ignition[911]: INFO : Ignition 2.20.0 Feb 13 19:03:14.399648 ignition[911]: INFO : Stage: mount Feb 13 19:03:14.399648 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:14.399648 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:03:14.407403 ignition[911]: INFO : mount: mount passed Feb 13 19:03:14.407403 ignition[911]: INFO : Ignition finished successfully Feb 13 19:03:14.402040 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:03:14.408393 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:03:14.415933 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:03:14.641071 systemd-networkd[762]: eth0: Gained IPv6LL Feb 13 19:03:15.048718 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:03:15.064161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:03:15.072115 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) Feb 13 19:03:15.076225 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:03:15.076283 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:03:15.076300 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:03:15.079937 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:03:15.081266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:03:15.106825 ignition[942]: INFO : Ignition 2.20.0 Feb 13 19:03:15.106825 ignition[942]: INFO : Stage: files Feb 13 19:03:15.108650 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:15.108650 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:03:15.108650 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:03:15.112449 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:03:15.112449 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:03:15.115638 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:03:15.115638 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:03:15.115638 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:03:15.115157 unknown[942]: wrote ssh authorized keys file for user: core Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:03:15.121829 ignition[942]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:03:15.121829 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 19:03:15.289773 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:03:15.545979 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:03:15.545979 ignition[942]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Feb 13 19:03:15.550087 ignition[942]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:03:15.550087 ignition[942]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:03:15.550087 ignition[942]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Feb 13 19:03:15.550087 ignition[942]: INFO 
: files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:03:15.574236 ignition[942]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:03:15.577910 ignition[942]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:03:15.579580 ignition[942]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:03:15.579580 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:03:15.579580 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:03:15.579580 ignition[942]: INFO : files: files passed Feb 13 19:03:15.579580 ignition[942]: INFO : Ignition finished successfully Feb 13 19:03:15.581484 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:03:15.591122 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:03:15.593542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:03:15.596158 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:03:15.596282 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 19:03:15.602254 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:03:15.606710 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:03:15.606710 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:03:15.610601 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:03:15.612673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:03:15.616120 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:03:15.632119 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:03:15.654776 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:03:15.654901 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:03:15.657591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:03:15.658713 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:03:15.659821 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:03:15.660743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:03:15.680221 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:03:15.698194 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:03:15.707734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:03:15.709149 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:03:15.711285 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 19:03:15.713317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:03:15.713464 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:03:15.716199 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:03:15.717339 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:03:15.719212 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:03:15.721082 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:03:15.722925 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:03:15.724983 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:03:15.727129 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:03:15.729389 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:03:15.731339 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:03:15.733422 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:03:15.735029 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:03:15.735183 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:03:15.737800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:03:15.739923 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:03:15.742092 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:03:15.743049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:03:15.744399 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:03:15.744558 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:03:15.747327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 13 19:03:15.747466 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:03:15.749951 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:03:15.751674 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:03:15.754981 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:03:15.756386 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:03:15.758193 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:03:15.760302 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:03:15.760412 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:03:15.762829 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:03:15.762937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:03:15.764944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:03:15.765073 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:03:15.767060 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:03:15.767186 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:03:15.781144 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:03:15.784423 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:03:15.785394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:03:15.785548 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:03:15.787676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:03:15.787789 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:03:15.794880 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 19:03:15.795011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:03:15.800585 ignition[997]: INFO : Ignition 2.20.0 Feb 13 19:03:15.800585 ignition[997]: INFO : Stage: umount Feb 13 19:03:15.800585 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:15.800585 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:03:15.805110 ignition[997]: INFO : umount: umount passed Feb 13 19:03:15.805110 ignition[997]: INFO : Ignition finished successfully Feb 13 19:03:15.804712 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:03:15.805018 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:03:15.808249 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:03:15.808981 systemd[1]: Stopped target network.target - Network. Feb 13 19:03:15.810971 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:03:15.811043 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:03:15.812995 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:03:15.813062 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:03:15.815105 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:03:15.815164 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:03:15.816862 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:03:15.816922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:03:15.818982 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:03:15.820667 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:03:15.830703 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:03:15.831115 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Feb 13 19:03:15.836736 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:03:15.837076 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:03:15.837228 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:03:15.840642 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:03:15.841777 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:03:15.841872 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:03:15.863940 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:03:15.864984 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:03:15.865068 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:03:15.867394 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:15.867453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:15.878180 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:03:15.878255 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:03:15.881024 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:03:15.881087 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:03:15.885018 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:03:15.888487 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:03:15.888579 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:03:15.896351 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:03:15.896486 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:03:15.899976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:03:15.900032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:03:15.901235 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:03:15.901271 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:03:15.903194 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:03:15.903253 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:03:15.906033 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:03:15.906084 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:03:15.908989 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:03:15.909034 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:03:15.911426 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:03:15.911481 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:03:15.914416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:03:15.915639 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:03:15.915702 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:03:15.919294 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:03:15.919342 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:03:15.921772 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:03:15.921820 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:03:15.923999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 19:03:15.924049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:15.927591 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:03:15.927750 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:03:15.929791 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:03:15.929867 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:03:15.932121 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:03:15.942129 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:03:15.950529 systemd[1]: Switching root.
Feb 13 19:03:15.975196 systemd-journald[239]: Journal stopped
Feb 13 19:03:16.884372 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:03:16.887660 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:03:16.887678 kernel: SELinux: policy capability open_perms=1
Feb 13 19:03:16.887688 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:03:16.887698 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:03:16.887707 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:03:16.887718 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:03:16.887727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:03:16.887740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:03:16.887750 kernel: audit: type=1403 audit(1739473396.134:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:03:16.887761 systemd[1]: Successfully loaded SELinux policy in 45.189ms.
Feb 13 19:03:16.887785 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.413ms.
Feb 13 19:03:16.887797 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:03:16.887810 systemd[1]: Detected virtualization kvm.
Feb 13 19:03:16.887821 systemd[1]: Detected architecture arm64.
Feb 13 19:03:16.887831 systemd[1]: Detected first boot.
Feb 13 19:03:16.887842 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:03:16.887853 zram_generator::config[1045]: No configuration found.
Feb 13 19:03:16.887864 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:03:16.887877 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:03:16.887888 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:03:16.887900 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:03:16.887920 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:03:16.887932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:03:16.887943 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:03:16.887953 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:03:16.887964 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:03:16.887974 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:03:16.887984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:03:16.887994 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:03:16.888006 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:03:16.888017 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:03:16.888026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:16.888037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:16.888047 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:03:16.888057 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:03:16.888067 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:03:16.888078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:03:16.888088 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 19:03:16.888099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:16.888111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:03:16.888121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:03:16.888131 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:03:16.888149 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:03:16.888163 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:16.888174 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:03:16.888187 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:03:16.888198 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:03:16.888208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:03:16.888218 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:03:16.888229 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:03:16.888239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:16.888249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:16.888259 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:16.888269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:03:16.888280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:03:16.888293 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:03:16.888303 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:03:16.888313 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:03:16.888323 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:03:16.888334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:03:16.888347 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:03:16.888357 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:03:16.888369 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:03:16.888381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:03:16.888391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:03:16.888401 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:03:16.888412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:03:16.888422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:03:16.888433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:03:16.888443 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:03:16.888453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:03:16.888463 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:03:16.888475 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:03:16.888485 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:03:16.888494 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:03:16.888504 kernel: fuse: init (API version 7.39)
Feb 13 19:03:16.888513 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:03:16.888523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:03:16.888534 kernel: loop: module loaded
Feb 13 19:03:16.888542 kernel: ACPI: bus type drm_connector registered
Feb 13 19:03:16.888558 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:03:16.888569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:03:16.888579 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:03:16.888590 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:03:16.888601 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:03:16.888613 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:03:16.888624 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:03:16.888634 systemd[1]: Stopped verity-setup.service.
Feb 13 19:03:16.888675 systemd-journald[1113]: Collecting audit messages is disabled.
Feb 13 19:03:16.888698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:03:16.888709 systemd-journald[1113]: Journal started
Feb 13 19:03:16.888732 systemd-journald[1113]: Runtime Journal (/run/log/journal/a3295a52ecaf420d979218a768442f74) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:03:16.670670 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:03:16.679838 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:03:16.680225 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:03:16.892630 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:03:16.893316 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:03:16.894624 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:03:16.895820 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:03:16.897131 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:03:16.898380 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:03:16.900814 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:03:16.903328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:16.904990 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:03:16.905178 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:03:16.906747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:03:16.906982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:03:16.908361 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:03:16.908521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:03:16.909870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:03:16.910056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:03:16.911648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:03:16.911820 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:03:16.913194 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:03:16.913350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:03:16.914946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:16.916375 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:03:16.918115 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:03:16.919666 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:03:16.931725 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:03:16.947041 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:03:16.949429 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:03:16.950620 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:03:16.950667 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:03:16.952767 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:03:16.955314 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:03:16.957595 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:03:16.958825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:03:16.959953 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:03:16.963196 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:03:16.964459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:03:16.966219 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:03:16.967421 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:03:16.970118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:16.971876 systemd-journald[1113]: Time spent on flushing to /var/log/journal/a3295a52ecaf420d979218a768442f74 is 13.752ms for 852 entries.
Feb 13 19:03:16.971876 systemd-journald[1113]: System Journal (/var/log/journal/a3295a52ecaf420d979218a768442f74) is 8M, max 195.6M, 187.6M free.
Feb 13 19:03:17.013504 systemd-journald[1113]: Received client request to flush runtime journal.
Feb 13 19:03:17.013546 kernel: loop0: detected capacity change from 0 to 113512
Feb 13 19:03:17.013569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:03:16.973161 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:03:16.980111 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:03:16.983756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:16.985676 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:03:16.993518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:03:16.996184 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:03:16.998014 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:03:17.002314 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:03:17.016830 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:03:17.019855 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:03:17.022743 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:03:17.025238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:17.031922 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Feb 13 19:03:17.031939 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Feb 13 19:03:17.038011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:17.046410 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:03:17.048070 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:03:17.050507 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:03:17.058941 kernel: loop1: detected capacity change from 0 to 123192
Feb 13 19:03:17.071192 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:03:17.079152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:03:17.089941 kernel: loop2: detected capacity change from 0 to 201592
Feb 13 19:03:17.092663 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Feb 13 19:03:17.092687 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Feb 13 19:03:17.097292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:17.136950 kernel: loop3: detected capacity change from 0 to 113512
Feb 13 19:03:17.142943 kernel: loop4: detected capacity change from 0 to 123192
Feb 13 19:03:17.147927 kernel: loop5: detected capacity change from 0 to 201592
Feb 13 19:03:17.155113 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:03:17.155524 (sd-merge)[1192]: Merged extensions into '/usr'.
Feb 13 19:03:17.161356 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:03:17.161371 systemd[1]: Reloading...
Feb 13 19:03:17.215959 zram_generator::config[1220]: No configuration found.
Feb 13 19:03:17.286004 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:03:17.311278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:03:17.361021 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:03:17.361665 systemd[1]: Reloading finished in 199 ms.
Feb 13 19:03:17.389836 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:03:17.392940 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:03:17.407289 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:03:17.409530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:03:17.425074 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:03:17.425222 systemd[1]: Reloading...
Feb 13 19:03:17.478953 zram_generator::config[1287]: No configuration found.
Feb 13 19:03:17.496523 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:03:17.496733 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:03:17.497433 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:03:17.497640 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 19:03:17.497685 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 19:03:17.500203 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:03:17.500215 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 19:03:17.508775 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:03:17.508793 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 19:03:17.605961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:03:17.662986 systemd[1]: Reloading finished in 237 ms.
Feb 13 19:03:17.672739 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:03:17.689952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:17.698751 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:03:17.701556 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:03:17.704068 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:03:17.710292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:17.714529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:17.723598 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:03:17.729101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:03:17.730896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:03:17.740599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:03:17.743226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:03:17.744442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:03:17.744569 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:03:17.749432 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:03:17.752228 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:03:17.754249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:03:17.754566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:03:17.756643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:03:17.756842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:03:17.759312 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:03:17.759546 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:03:17.771717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:03:17.771898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:03:17.772687 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Feb 13 19:03:17.774201 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:03:17.779193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:03:17.789257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:03:17.793244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:03:17.794841 augenrules[1355]: No rules
Feb 13 19:03:17.796207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:03:17.797407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:03:17.797524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:03:17.798514 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:03:17.799107 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:03:17.802947 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:03:17.805120 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:03:17.807065 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:03:17.808638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:17.811095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:03:17.811284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:03:17.814535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:03:17.814719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:03:17.816444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:03:17.816598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:03:17.830992 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:03:17.832765 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:03:17.857251 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:03:17.858331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:03:17.859445 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:03:17.863763 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:03:17.867100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:03:17.870143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:03:17.871272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:03:17.871338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:03:17.873309 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:03:17.881600 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:03:17.882849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:03:17.885247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:03:17.885451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:03:17.888376 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:03:17.888542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:03:17.890101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:03:17.890275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:03:17.893142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:03:17.893306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:03:17.901973 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 19:03:17.904933 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373)
Feb 13 19:03:17.913089 augenrules[1393]: /sbin/augenrules: No change
Feb 13 19:03:17.923628 augenrules[1424]: No rules
Feb 13 19:03:17.925710 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:03:17.925924 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:03:17.932810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:03:17.943088 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:03:17.944729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:03:17.944774 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:03:17.981415 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:03:17.998287 systemd-resolved[1324]: Positive Trust Anchors:
Feb 13 19:03:17.998304 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:03:17.998337 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:03:18.011299 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:03:18.013127 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:03:18.018530 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Feb 13 19:03:18.020102 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:18.021343 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:18.032588 systemd-networkd[1400]: lo: Link UP
Feb 13 19:03:18.032597 systemd-networkd[1400]: lo: Gained carrier
Feb 13 19:03:18.039413 systemd-networkd[1400]: Enumeration completed
Feb 13 19:03:18.040454 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:03:18.043431 systemd[1]: Reached target network.target - Network.
Feb 13 19:03:18.044197 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:18.044209 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:03:18.044791 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:18.044823 systemd-networkd[1400]: eth0: Link UP
Feb 13 19:03:18.044825 systemd-networkd[1400]: eth0: Gained carrier
Feb 13 19:03:18.044833 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:18.062116 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:03:18.064556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:03:18.066839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:18.068396 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:03:18.074999 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:03:18.079233 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:03:18.083061 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
Feb 13 19:03:18.086266 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:03:18.087648 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:03:18.087706 systemd-timesyncd[1402]: Initial clock synchronization to Thu 2025-02-13 19:03:17.866333 UTC. Feb 13 19:03:18.091121 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:03:18.114275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:03:18.123366 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:03:18.124827 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:03:18.127107 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:03:18.128279 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:03:18.129537 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:03:18.130961 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:03:18.132100 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:03:18.133309 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:03:18.134574 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:03:18.134613 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:03:18.135546 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:03:18.138979 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:03:18.141458 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Feb 13 19:03:18.144706 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:03:18.146243 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:03:18.147576 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:03:18.151931 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:03:18.153739 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:03:18.156255 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:03:18.158207 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:03:18.159408 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:03:18.160494 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:03:18.161494 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:18.161526 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:18.162505 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:03:18.164336 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:03:18.165110 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:03:18.167776 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:03:18.173170 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:03:18.174294 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:03:18.175534 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 19:03:18.180203 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:03:18.181721 jq[1458]: false Feb 13 19:03:18.187355 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:03:18.194117 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:03:18.194707 dbus-daemon[1457]: [system] SELinux support is enabled Feb 13 19:03:18.194989 extend-filesystems[1459]: Found loop3 Feb 13 19:03:18.194989 extend-filesystems[1459]: Found loop4 Feb 13 19:03:18.194989 extend-filesystems[1459]: Found loop5 Feb 13 19:03:18.194989 extend-filesystems[1459]: Found vda Feb 13 19:03:18.194989 extend-filesystems[1459]: Found vda1 Feb 13 19:03:18.206063 extend-filesystems[1459]: Found vda2 Feb 13 19:03:18.206063 extend-filesystems[1459]: Found vda3 Feb 13 19:03:18.206063 extend-filesystems[1459]: Found usr Feb 13 19:03:18.206063 extend-filesystems[1459]: Found vda4 Feb 13 19:03:18.206063 extend-filesystems[1459]: Found vda6 Feb 13 19:03:18.206063 extend-filesystems[1459]: Found vda7 Feb 13 19:03:18.206063 extend-filesystems[1459]: Found vda9 Feb 13 19:03:18.206063 extend-filesystems[1459]: Checking size of /dev/vda9 Feb 13 19:03:18.196315 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:03:18.222855 extend-filesystems[1459]: Resized partition /dev/vda9 Feb 13 19:03:18.196847 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:03:18.198144 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:03:18.224797 jq[1475]: true Feb 13 19:03:18.203153 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:03:18.208668 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:03:18.215899 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:03:18.228551 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:03:18.228733 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:03:18.229075 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:03:18.229319 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:03:18.237152 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:03:18.237349 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:03:18.239662 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:03:18.249858 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:03:18.261931 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:03:18.262003 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Feb 13 19:03:18.263195 jq[1482]: true Feb 13 19:03:18.273245 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:03:18.273294 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:03:18.274889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 19:03:18.275861 update_engine[1472]: I20250213 19:03:18.275149 1472 main.cc:92] Flatcar Update Engine starting Feb 13 19:03:18.274921 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:03:18.284502 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:03:18.288199 update_engine[1472]: I20250213 19:03:18.286973 1472 update_check_scheduler.cc:74] Next update check in 10m4s Feb 13 19:03:18.287612 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:03:18.301308 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:03:18.302550 systemd-logind[1469]: New seat seat0. Feb 13 19:03:18.303970 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:03:18.331823 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:03:18.343315 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:03:18.343315 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:03:18.343315 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:03:18.349864 extend-filesystems[1459]: Resized filesystem in /dev/vda9 Feb 13 19:03:18.346739 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:03:18.346994 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:03:18.356407 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:03:18.358502 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:03:18.359469 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:03:18.362040 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 19:03:18.521643 containerd[1483]: time="2025-02-13T19:03:18.520223480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:03:18.548359 containerd[1483]: time="2025-02-13T19:03:18.548299240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:18.549899 containerd[1483]: time="2025-02-13T19:03:18.549862360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550000 containerd[1483]: time="2025-02-13T19:03:18.549985520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:03:18.550058 containerd[1483]: time="2025-02-13T19:03:18.550044920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:03:18.550281 containerd[1483]: time="2025-02-13T19:03:18.550259680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:03:18.550351 containerd[1483]: time="2025-02-13T19:03:18.550338240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550478 containerd[1483]: time="2025-02-13T19:03:18.550460400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550531 containerd[1483]: time="2025-02-13T19:03:18.550518800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550787 containerd[1483]: time="2025-02-13T19:03:18.550764640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550850 containerd[1483]: time="2025-02-13T19:03:18.550837160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550918 containerd[1483]: time="2025-02-13T19:03:18.550890920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:18.550985 containerd[1483]: time="2025-02-13T19:03:18.550970960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:18.551122 containerd[1483]: time="2025-02-13T19:03:18.551105800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:18.551398 containerd[1483]: time="2025-02-13T19:03:18.551377040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:18.551607 containerd[1483]: time="2025-02-13T19:03:18.551586800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:18.551663 containerd[1483]: time="2025-02-13T19:03:18.551650960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 19:03:18.551790 containerd[1483]: time="2025-02-13T19:03:18.551773440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:03:18.551886 containerd[1483]: time="2025-02-13T19:03:18.551871240Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:03:18.557318 containerd[1483]: time="2025-02-13T19:03:18.557285120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:03:18.557454 containerd[1483]: time="2025-02-13T19:03:18.557433720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:03:18.557557 containerd[1483]: time="2025-02-13T19:03:18.557539680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:03:18.557623 containerd[1483]: time="2025-02-13T19:03:18.557610520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:03:18.557675 containerd[1483]: time="2025-02-13T19:03:18.557664160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:03:18.557865 containerd[1483]: time="2025-02-13T19:03:18.557845000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:03:18.558219 containerd[1483]: time="2025-02-13T19:03:18.558200800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:03:18.558389 containerd[1483]: time="2025-02-13T19:03:18.558370360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:03:18.558476 containerd[1483]: time="2025-02-13T19:03:18.558462640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Feb 13 19:03:18.558534 containerd[1483]: time="2025-02-13T19:03:18.558522000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:03:18.558591 containerd[1483]: time="2025-02-13T19:03:18.558578200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558644 containerd[1483]: time="2025-02-13T19:03:18.558631880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558705 containerd[1483]: time="2025-02-13T19:03:18.558691840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558757 containerd[1483]: time="2025-02-13T19:03:18.558744720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558813 containerd[1483]: time="2025-02-13T19:03:18.558801000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558865 containerd[1483]: time="2025-02-13T19:03:18.558853840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558935 containerd[1483]: time="2025-02-13T19:03:18.558920960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.558989 containerd[1483]: time="2025-02-13T19:03:18.558975720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:03:18.559065 containerd[1483]: time="2025-02-13T19:03:18.559052240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:03:18.559119 containerd[1483]: time="2025-02-13T19:03:18.559106840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.559184 containerd[1483]: time="2025-02-13T19:03:18.559171440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.559258 containerd[1483]: time="2025-02-13T19:03:18.559244440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.559332 containerd[1483]: time="2025-02-13T19:03:18.559318880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.559440 containerd[1483]: time="2025-02-13T19:03:18.559422000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.559513 containerd[1483]: time="2025-02-13T19:03:18.559499920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559553240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559573320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559592000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559605200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559619600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559639360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559654360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559677040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559690680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.559702240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.560000400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.560020880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:03:18.560937 containerd[1483]: time="2025-02-13T19:03:18.560032320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:03:18.561204 containerd[1483]: time="2025-02-13T19:03:18.560044400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:03:18.561204 containerd[1483]: time="2025-02-13T19:03:18.560053480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 19:03:18.561204 containerd[1483]: time="2025-02-13T19:03:18.560065120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:03:18.561204 containerd[1483]: time="2025-02-13T19:03:18.560075440Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:03:18.561204 containerd[1483]: time="2025-02-13T19:03:18.560086160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:03:18.561292 containerd[1483]: time="2025-02-13T19:03:18.560363040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:03:18.561292 containerd[1483]: time="2025-02-13T19:03:18.560409720Z" level=info msg="Connect containerd service" Feb 13 19:03:18.561292 containerd[1483]: time="2025-02-13T19:03:18.560437760Z" level=info msg="using legacy CRI server" Feb 13 19:03:18.561292 containerd[1483]: time="2025-02-13T19:03:18.560444560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:03:18.561292 containerd[1483]: time="2025-02-13T19:03:18.560772480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:03:18.567393 containerd[1483]: time="2025-02-13T19:03:18.567349440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 19:03:18.567837 containerd[1483]: time="2025-02-13T19:03:18.567645160Z" level=info msg="Start subscribing containerd event" Feb 13 19:03:18.567837 containerd[1483]: time="2025-02-13T19:03:18.567707440Z" level=info msg="Start recovering state" Feb 13 19:03:18.567837 containerd[1483]: time="2025-02-13T19:03:18.567782200Z" level=info msg="Start event monitor" Feb 13 19:03:18.567837 containerd[1483]: time="2025-02-13T19:03:18.567802920Z" level=info msg="Start snapshots syncer" Feb 13 19:03:18.567837 containerd[1483]: time="2025-02-13T19:03:18.567814800Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:03:18.567837 containerd[1483]: time="2025-02-13T19:03:18.567821520Z" level=info msg="Start streaming server" Feb 13 19:03:18.568541 containerd[1483]: time="2025-02-13T19:03:18.568520760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:03:18.568702 containerd[1483]: time="2025-02-13T19:03:18.568680600Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:03:18.568842 containerd[1483]: time="2025-02-13T19:03:18.568828640Z" level=info msg="containerd successfully booted in 0.049996s" Feb 13 19:03:18.568932 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:03:19.633066 systemd-networkd[1400]: eth0: Gained IPv6LL Feb 13 19:03:19.636948 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:03:19.639313 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:03:19.661224 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:03:19.663867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:19.666078 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:03:19.681125 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Feb 13 19:03:19.683074 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:03:19.685401 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:03:19.691348 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:03:19.886068 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:03:19.904558 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:03:19.915169 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:03:19.919605 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:03:19.919796 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:03:19.923117 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:03:19.933681 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:03:19.944340 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:03:19.946717 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:03:19.948094 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:03:20.198082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:20.199737 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:03:20.201119 systemd[1]: Startup finished in 618ms (kernel) + 5.393s (initrd) + 4.124s (userspace) = 10.136s. 
Feb 13 19:03:20.201987 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:03:20.602644 kubelet[1562]: E0213 19:03:20.602519 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:03:20.604662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:03:20.604818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:03:20.605117 systemd[1]: kubelet.service: Consumed 789ms CPU time, 249.5M memory peak. Feb 13 19:03:24.105075 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:03:24.106233 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:42686.service - OpenSSH per-connection server daemon (10.0.0.1:42686). Feb 13 19:03:24.170783 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 42686 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:24.172676 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:24.188841 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:03:24.198199 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:03:24.206069 systemd-logind[1469]: New session 1 of user core. Feb 13 19:03:24.209618 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:03:24.212326 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:03:24.218673 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:03:24.224499 systemd-logind[1469]: New session c1 of user core. Feb 13 19:03:24.343401 systemd[1579]: Queued start job for default target default.target. Feb 13 19:03:24.356964 systemd[1579]: Created slice app.slice - User Application Slice. Feb 13 19:03:24.356995 systemd[1579]: Reached target paths.target - Paths. Feb 13 19:03:24.357032 systemd[1579]: Reached target timers.target - Timers. Feb 13 19:03:24.359690 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:03:24.368826 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:03:24.368981 systemd[1579]: Reached target sockets.target - Sockets. Feb 13 19:03:24.369040 systemd[1579]: Reached target basic.target - Basic System. Feb 13 19:03:24.369073 systemd[1579]: Reached target default.target - Main User Target. Feb 13 19:03:24.369099 systemd[1579]: Startup finished in 138ms. Feb 13 19:03:24.369179 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:03:24.370860 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:03:24.436559 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:42696.service - OpenSSH per-connection server daemon (10.0.0.1:42696). Feb 13 19:03:24.485041 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 42696 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:24.486309 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:24.490524 systemd-logind[1469]: New session 2 of user core. Feb 13 19:03:24.499118 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:03:24.550033 sshd[1592]: Connection closed by 10.0.0.1 port 42696
Feb 13 19:03:24.550532 sshd-session[1590]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:24.560955 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:42696.service: Deactivated successfully.
Feb 13 19:03:24.562406 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:03:24.564120 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:03:24.574235 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:42700.service - OpenSSH per-connection server daemon (10.0.0.1:42700).
Feb 13 19:03:24.575260 systemd-logind[1469]: Removed session 2.
Feb 13 19:03:24.623639 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 42700 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:24.624829 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:24.630291 systemd-logind[1469]: New session 3 of user core.
Feb 13 19:03:24.642116 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:03:24.690477 sshd[1600]: Connection closed by 10.0.0.1 port 42700
Feb 13 19:03:24.690998 sshd-session[1597]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:24.704968 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:42700.service: Deactivated successfully.
Feb 13 19:03:24.706381 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:03:24.707084 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:03:24.708692 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:42706.service - OpenSSH per-connection server daemon (10.0.0.1:42706).
Feb 13 19:03:24.711333 systemd-logind[1469]: Removed session 3.
Feb 13 19:03:24.752882 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 42706 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:24.754090 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:24.758886 systemd-logind[1469]: New session 4 of user core.
Feb 13 19:03:24.768062 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:03:24.819478 sshd[1608]: Connection closed by 10.0.0.1 port 42706
Feb 13 19:03:24.819806 sshd-session[1605]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:24.832192 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:42706.service: Deactivated successfully.
Feb 13 19:03:24.833684 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:03:24.834362 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:03:24.846204 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:42722.service - OpenSSH per-connection server daemon (10.0.0.1:42722).
Feb 13 19:03:24.847044 systemd-logind[1469]: Removed session 4.
Feb 13 19:03:24.886718 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 42722 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:24.888162 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:24.892329 systemd-logind[1469]: New session 5 of user core.
Feb 13 19:03:24.906112 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:03:24.978997 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:03:24.979282 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:03:24.994733 sudo[1617]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:24.996091 sshd[1616]: Connection closed by 10.0.0.1 port 42722
Feb 13 19:03:24.997298 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:25.008196 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:42722.service: Deactivated successfully.
Feb 13 19:03:25.009684 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:03:25.010418 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:03:25.023247 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:42728.service - OpenSSH per-connection server daemon (10.0.0.1:42728).
Feb 13 19:03:25.024187 systemd-logind[1469]: Removed session 5.
Feb 13 19:03:25.060575 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 42728 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:25.061943 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:25.066285 systemd-logind[1469]: New session 6 of user core.
Feb 13 19:03:25.086120 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:03:25.136202 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:03:25.136467 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:03:25.139297 sudo[1627]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:25.143723 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:03:25.143997 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:03:25.162335 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:03:25.184502 augenrules[1649]: No rules
Feb 13 19:03:25.185572 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:03:25.186955 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:03:25.187961 sudo[1626]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:25.189760 sshd[1625]: Connection closed by 10.0.0.1 port 42728
Feb 13 19:03:25.189686 sshd-session[1622]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:25.199850 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:42728.service: Deactivated successfully.
Feb 13 19:03:25.201738 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:03:25.203947 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:03:25.205027 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:42742.service - OpenSSH per-connection server daemon (10.0.0.1:42742).
Feb 13 19:03:25.206126 systemd-logind[1469]: Removed session 6.
Feb 13 19:03:25.245932 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 42742 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w
Feb 13 19:03:25.247115 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:25.251032 systemd-logind[1469]: New session 7 of user core.
Feb 13 19:03:25.263086 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:03:25.312675 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:03:25.313313 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:03:25.333256 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 19:03:25.348064 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:03:25.348269 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 19:03:25.805183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:03:25.805327 systemd[1]: kubelet.service: Consumed 789ms CPU time, 249.5M memory peak.
Feb 13 19:03:25.815173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:03:25.845862 systemd[1]: Reload requested from client PID 1703 ('systemctl') (unit session-7.scope)...
Feb 13 19:03:25.845882 systemd[1]: Reloading...
Feb 13 19:03:25.933944 zram_generator::config[1749]: No configuration found.
Feb 13 19:03:26.121427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:03:26.192850 systemd[1]: Reloading finished in 345 ms.
Feb 13 19:03:26.231013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:03:26.232386 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:03:26.235955 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:03:26.236164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:03:26.236201 systemd[1]: kubelet.service: Consumed 88ms CPU time, 90.2M memory peak.
Feb 13 19:03:26.237947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:03:26.343214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:03:26.347232 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:03:26.383496 kubelet[1793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:03:26.383496 kubelet[1793]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:03:26.383496 kubelet[1793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:03:26.383822 kubelet[1793]: I0213 19:03:26.383468 1793 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:03:26.788594 kubelet[1793]: I0213 19:03:26.788480 1793 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 19:03:26.788594 kubelet[1793]: I0213 19:03:26.788516 1793 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:03:26.789412 kubelet[1793]: I0213 19:03:26.788925 1793 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 19:03:26.826159 kubelet[1793]: I0213 19:03:26.826106 1793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:03:26.833965 kubelet[1793]: E0213 19:03:26.833869 1793 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:03:26.833965 kubelet[1793]: I0213 19:03:26.833965 1793 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:03:26.837305 kubelet[1793]: I0213 19:03:26.837279 1793 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:03:26.838732 kubelet[1793]: I0213 19:03:26.838670 1793 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:03:26.838963 kubelet[1793]: I0213 19:03:26.838723 1793 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:03:26.839056 kubelet[1793]: I0213 19:03:26.839020 1793 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:03:26.839056 kubelet[1793]: I0213 19:03:26.839030 1793 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 19:03:26.839255 kubelet[1793]: I0213 19:03:26.839222 1793 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:03:26.842626 kubelet[1793]: I0213 19:03:26.842598 1793 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 19:03:26.842626 kubelet[1793]: I0213 19:03:26.842627 1793 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:03:26.842705 kubelet[1793]: I0213 19:03:26.842647 1793 kubelet.go:352] "Adding apiserver pod source"
Feb 13 19:03:26.842705 kubelet[1793]: I0213 19:03:26.842659 1793 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:03:26.842957 kubelet[1793]: E0213 19:03:26.842749 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:26.844087 kubelet[1793]: E0213 19:03:26.844041 1793 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:26.846469 kubelet[1793]: I0213 19:03:26.846449 1793 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:03:26.847334 kubelet[1793]: I0213 19:03:26.847258 1793 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:03:26.847403 kubelet[1793]: W0213 19:03:26.847391 1793 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:03:26.848886 kubelet[1793]: I0213 19:03:26.848837 1793 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 19:03:26.849782 kubelet[1793]: I0213 19:03:26.849009 1793 server.go:1287] "Started kubelet"
Feb 13 19:03:26.850851 kubelet[1793]: I0213 19:03:26.850144 1793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:03:26.850851 kubelet[1793]: I0213 19:03:26.850496 1793 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:03:26.850851 kubelet[1793]: I0213 19:03:26.850560 1793 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:03:26.850851 kubelet[1793]: I0213 19:03:26.850755 1793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:03:26.851517 kubelet[1793]: I0213 19:03:26.851461 1793 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 19:03:26.853644 kubelet[1793]: W0213 19:03:26.852026 1793 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.49" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 19:03:26.853644 kubelet[1793]: E0213 19:03:26.852088 1793 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.49\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:03:26.853644 kubelet[1793]: W0213 19:03:26.852200 1793 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 19:03:26.853644 kubelet[1793]: E0213 19:03:26.852216 1793 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:03:26.853644 kubelet[1793]: I0213 19:03:26.853077 1793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:03:26.854376 kubelet[1793]: E0213 19:03:26.854348 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:26.854519 kubelet[1793]: I0213 19:03:26.854508 1793 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 19:03:26.856603 kubelet[1793]: I0213 19:03:26.854949 1793 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:03:26.856603 kubelet[1793]: E0213 19:03:26.855554 1793 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:03:26.856603 kubelet[1793]: I0213 19:03:26.855593 1793 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:03:26.857189 kubelet[1793]: I0213 19:03:26.857163 1793 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:03:26.857325 kubelet[1793]: I0213 19:03:26.857303 1793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:03:26.859238 kubelet[1793]: I0213 19:03:26.859208 1793 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:03:26.862352 kubelet[1793]: W0213 19:03:26.862301 1793 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 19:03:26.862421 kubelet[1793]: E0213 19:03:26.862355 1793 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 13 19:03:26.863910 kubelet[1793]: E0213 19:03:26.862396 1793 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.49.1823d9d8ef78a850 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.49,UID:10.0.0.49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.49,},FirstTimestamp:2025-02-13 19:03:26.848968784 +0000 UTC m=+0.498551973,LastTimestamp:2025-02-13 19:03:26.848968784 +0000 UTC m=+0.498551973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.49,}"
Feb 13 19:03:26.863910 kubelet[1793]: E0213 19:03:26.863118 1793 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.49\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 19:03:26.873519 kubelet[1793]: I0213 19:03:26.873482 1793 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 19:03:26.873519 kubelet[1793]: I0213 19:03:26.873510 1793 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 19:03:26.873646 kubelet[1793]: I0213 19:03:26.873531 1793 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:03:26.939067 kubelet[1793]: I0213 19:03:26.939029 1793 policy_none.go:49] "None policy: Start"
Feb 13 19:03:26.939067 kubelet[1793]: I0213 19:03:26.939062 1793 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 19:03:26.939067 kubelet[1793]: I0213 19:03:26.939073 1793 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:03:26.944421 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 19:03:26.954595 kubelet[1793]: E0213 19:03:26.954556 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:26.955126 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:03:26.958536 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:03:26.963854 kubelet[1793]: I0213 19:03:26.963808 1793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:03:26.964888 kubelet[1793]: I0213 19:03:26.964863 1793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:03:26.964960 kubelet[1793]: I0213 19:03:26.964894 1793 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 19:03:26.964960 kubelet[1793]: I0213 19:03:26.964927 1793 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:03:26.964960 kubelet[1793]: I0213 19:03:26.964935 1793 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 19:03:26.965041 kubelet[1793]: E0213 19:03:26.964978 1793 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:03:26.967026 kubelet[1793]: I0213 19:03:26.966783 1793 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:03:26.967026 kubelet[1793]: I0213 19:03:26.967000 1793 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:03:26.967026 kubelet[1793]: I0213 19:03:26.967011 1793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:03:26.967837 kubelet[1793]: I0213 19:03:26.967615 1793 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:03:26.968887 kubelet[1793]: E0213 19:03:26.968224 1793 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:03:26.968887 kubelet[1793]: E0213 19:03:26.968269 1793 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.49\" not found"
Feb 13 19:03:27.068375 kubelet[1793]: I0213 19:03:27.068220 1793 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.49"
Feb 13 19:03:27.068487 kubelet[1793]: E0213 19:03:27.068396 1793 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.49\" not found" node="10.0.0.49"
Feb 13 19:03:27.075979 kubelet[1793]: I0213 19:03:27.075937 1793 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.49"
Feb 13 19:03:27.075979 kubelet[1793]: E0213 19:03:27.075977 1793 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.49\": node \"10.0.0.49\" not found"
Feb 13 19:03:27.079598 kubelet[1793]: I0213 19:03:27.079558 1793 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 19:03:27.082196 containerd[1483]: time="2025-02-13T19:03:27.082142462Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:03:27.082674 kubelet[1793]: I0213 19:03:27.082656 1793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 19:03:27.092627 kubelet[1793]: E0213 19:03:27.092588 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.192944 kubelet[1793]: E0213 19:03:27.192893 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.293332 kubelet[1793]: E0213 19:03:27.293297 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.372759 sudo[1661]: pam_unix(sudo:session): session closed for user root
Feb 13 19:03:27.373881 sshd[1660]: Connection closed by 10.0.0.1 port 42742
Feb 13 19:03:27.374214 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:27.377633 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:42742.service: Deactivated successfully.
Feb 13 19:03:27.379368 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:03:27.379582 systemd[1]: session-7.scope: Consumed 455ms CPU time, 76.5M memory peak.
Feb 13 19:03:27.380508 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:03:27.381401 systemd-logind[1469]: Removed session 7.
Feb 13 19:03:27.394411 kubelet[1793]: E0213 19:03:27.394360 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.494924 kubelet[1793]: E0213 19:03:27.494824 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.595328 kubelet[1793]: E0213 19:03:27.595256 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.695861 kubelet[1793]: E0213 19:03:27.695740 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.791081 kubelet[1793]: I0213 19:03:27.791037 1793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 19:03:27.791225 kubelet[1793]: W0213 19:03:27.791184 1793 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:03:27.791225 kubelet[1793]: W0213 19:03:27.791184 1793 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:03:27.796196 kubelet[1793]: E0213 19:03:27.796169 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.843567 kubelet[1793]: E0213 19:03:27.843526 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:27.897215 kubelet[1793]: E0213 19:03:27.897165 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:27.998014 kubelet[1793]: E0213 19:03:27.997891 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:28.098348 kubelet[1793]: E0213 19:03:28.098311 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:28.198771 kubelet[1793]: E0213 19:03:28.198730 1793 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.49\" not found"
Feb 13 19:03:28.844184 kubelet[1793]: E0213 19:03:28.844129 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:28.844184 kubelet[1793]: I0213 19:03:28.844160 1793 apiserver.go:52] "Watching apiserver"
Feb 13 19:03:28.857127 systemd[1]: Created slice kubepods-burstable-podb71c1b64_3f7d_4b64_9c96_00b45bca90d4.slice - libcontainer container kubepods-burstable-podb71c1b64_3f7d_4b64_9c96_00b45bca90d4.slice.
Feb 13 19:03:28.858168 kubelet[1793]: I0213 19:03:28.857363 1793 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:03:28.875287 kubelet[1793]: I0213 19:03:28.864946 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hostproc\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875287 kubelet[1793]: I0213 19:03:28.864984 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-lib-modules\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875287 kubelet[1793]: I0213 19:03:28.865001 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-xtables-lock\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875287 kubelet[1793]: I0213 19:03:28.865016 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-net\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875287 kubelet[1793]: I0213 19:03:28.865031 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e748ff-142a-4987-936a-467159798a3c-lib-modules\") pod \"kube-proxy-bzszk\" (UID: \"b8e748ff-142a-4987-936a-467159798a3c\") " pod="kube-system/kube-proxy-bzszk"
Feb 13 19:03:28.875287 kubelet[1793]: I0213 19:03:28.865047 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-bpf-maps\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875547 kubelet[1793]: I0213 19:03:28.865062 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-cgroup\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875547 kubelet[1793]: I0213 19:03:28.865078 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cni-path\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875547 kubelet[1793]: I0213 19:03:28.865092 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-kernel\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875547 kubelet[1793]: I0213 19:03:28.865106 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hubble-tls\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875547 kubelet[1793]: I0213 19:03:28.865120 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-config-path\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875547 kubelet[1793]: I0213 19:03:28.865134 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljb8h\" (UniqueName: \"kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-kube-api-access-ljb8h\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875663 kubelet[1793]: I0213 19:03:28.865150 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87z4d\" (UniqueName: \"kubernetes.io/projected/b8e748ff-142a-4987-936a-467159798a3c-kube-api-access-87z4d\") pod \"kube-proxy-bzszk\" (UID: \"b8e748ff-142a-4987-936a-467159798a3c\") " pod="kube-system/kube-proxy-bzszk"
Feb 13 19:03:28.875663 kubelet[1793]: I0213 19:03:28.865164 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-run\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875663 kubelet[1793]: I0213 19:03:28.865177 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-etc-cni-netd\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b"
Feb 13 19:03:28.875663 kubelet[1793]: I0213 19:03:28.865192 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\"
(UniqueName: \"kubernetes.io/secret/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-clustermesh-secrets\") pod \"cilium-jhl7b\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " pod="kube-system/cilium-jhl7b" Feb 13 19:03:28.875663 kubelet[1793]: I0213 19:03:28.865205 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8e748ff-142a-4987-936a-467159798a3c-kube-proxy\") pod \"kube-proxy-bzszk\" (UID: \"b8e748ff-142a-4987-936a-467159798a3c\") " pod="kube-system/kube-proxy-bzszk" Feb 13 19:03:28.875796 kubelet[1793]: I0213 19:03:28.865232 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e748ff-142a-4987-936a-467159798a3c-xtables-lock\") pod \"kube-proxy-bzszk\" (UID: \"b8e748ff-142a-4987-936a-467159798a3c\") " pod="kube-system/kube-proxy-bzszk" Feb 13 19:03:28.898385 systemd[1]: Created slice kubepods-besteffort-podb8e748ff_142a_4987_936a_467159798a3c.slice - libcontainer container kubepods-besteffort-podb8e748ff_142a_4987_936a_467159798a3c.slice. 
Feb 13 19:03:29.198446 containerd[1483]: time="2025-02-13T19:03:29.198289585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jhl7b,Uid:b71c1b64-3f7d-4b64-9c96-00b45bca90d4,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:29.216688 containerd[1483]: time="2025-02-13T19:03:29.216337724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzszk,Uid:b8e748ff-142a-4987-936a-467159798a3c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:29.675805 containerd[1483]: time="2025-02-13T19:03:29.675672017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:29.677106 containerd[1483]: time="2025-02-13T19:03:29.677066398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 19:03:29.679046 containerd[1483]: time="2025-02-13T19:03:29.678996455Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:29.682737 containerd[1483]: time="2025-02-13T19:03:29.682678194Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:29.683704 containerd[1483]: time="2025-02-13T19:03:29.683369409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:03:29.686994 containerd[1483]: time="2025-02-13T19:03:29.685215877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:03:29.687530 containerd[1483]: time="2025-02-13T19:03:29.687492593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.117952ms"
Feb 13 19:03:29.688447 containerd[1483]: time="2025-02-13T19:03:29.688407806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.980816ms"
Feb 13 19:03:29.796478 containerd[1483]: time="2025-02-13T19:03:29.796294547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:29.796478 containerd[1483]: time="2025-02-13T19:03:29.796368565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:29.796478 containerd[1483]: time="2025-02-13T19:03:29.796379882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:29.796705 containerd[1483]: time="2025-02-13T19:03:29.796653278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:29.799972 containerd[1483]: time="2025-02-13T19:03:29.799739859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:29.799972 containerd[1483]: time="2025-02-13T19:03:29.799796643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:29.799972 containerd[1483]: time="2025-02-13T19:03:29.799808238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:29.799972 containerd[1483]: time="2025-02-13T19:03:29.799918033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:29.844633 kubelet[1793]: E0213 19:03:29.844597 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:29.892179 systemd[1]: Started cri-containerd-03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d.scope - libcontainer container 03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d.
Feb 13 19:03:29.895865 systemd[1]: Started cri-containerd-312d6be1e26b940ff26e8adbc41786eee51a17e948b6231d941f8c8b67f947f5.scope - libcontainer container 312d6be1e26b940ff26e8adbc41786eee51a17e948b6231d941f8c8b67f947f5.
Feb 13 19:03:29.915498 containerd[1483]: time="2025-02-13T19:03:29.915460273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jhl7b,Uid:b71c1b64-3f7d-4b64-9c96-00b45bca90d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\""
Feb 13 19:03:29.916612 kubelet[1793]: E0213 19:03:29.916590 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:29.917839 containerd[1483]: time="2025-02-13T19:03:29.917771536Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:03:29.919870 containerd[1483]: time="2025-02-13T19:03:29.919840415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzszk,Uid:b8e748ff-142a-4987-936a-467159798a3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"312d6be1e26b940ff26e8adbc41786eee51a17e948b6231d941f8c8b67f947f5\""
Feb 13 19:03:29.920578 kubelet[1793]: E0213 19:03:29.920554 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:29.972505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302596184.mount: Deactivated successfully.
Feb 13 19:03:30.845113 kubelet[1793]: E0213 19:03:30.845036 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:31.845424 kubelet[1793]: E0213 19:03:31.845325 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:32.265432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718156688.mount: Deactivated successfully.
Feb 13 19:03:32.846147 kubelet[1793]: E0213 19:03:32.846098 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:33.846954 kubelet[1793]: E0213 19:03:33.846886 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:34.287382 containerd[1483]: time="2025-02-13T19:03:34.287178418Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:34.288290 containerd[1483]: time="2025-02-13T19:03:34.288208081Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 19:03:34.289487 containerd[1483]: time="2025-02-13T19:03:34.289430782Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:34.291194 containerd[1483]: time="2025-02-13T19:03:34.291074984Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.373239057s"
Feb 13 19:03:34.291194 containerd[1483]: time="2025-02-13T19:03:34.291118820Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 19:03:34.295218 containerd[1483]: time="2025-02-13T19:03:34.295066753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 19:03:34.297377 containerd[1483]: time="2025-02-13T19:03:34.297219411Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:03:34.309761 containerd[1483]: time="2025-02-13T19:03:34.309707917Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\""
Feb 13 19:03:34.310401 containerd[1483]: time="2025-02-13T19:03:34.310367128Z" level=info msg="StartContainer for \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\""
Feb 13 19:03:34.342105 systemd[1]: Started cri-containerd-d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9.scope - libcontainer container d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9.
Feb 13 19:03:34.366830 containerd[1483]: time="2025-02-13T19:03:34.366767170Z" level=info msg="StartContainer for \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\" returns successfully"
Feb 13 19:03:34.402154 systemd[1]: cri-containerd-d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9.scope: Deactivated successfully.
Feb 13 19:03:34.528756 containerd[1483]: time="2025-02-13T19:03:34.528700594Z" level=info msg="shim disconnected" id=d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9 namespace=k8s.io
Feb 13 19:03:34.529164 containerd[1483]: time="2025-02-13T19:03:34.529008720Z" level=warning msg="cleaning up after shim disconnected" id=d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9 namespace=k8s.io
Feb 13 19:03:34.529164 containerd[1483]: time="2025-02-13T19:03:34.529025816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:34.848051 kubelet[1793]: E0213 19:03:34.848010 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:34.988093 kubelet[1793]: E0213 19:03:34.987924 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:34.989846 containerd[1483]: time="2025-02-13T19:03:34.989805835Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:03:35.006396 containerd[1483]: time="2025-02-13T19:03:35.006267697Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\""
Feb 13 19:03:35.006912 containerd[1483]: time="2025-02-13T19:03:35.006882443Z" level=info msg="StartContainer for \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\""
Feb 13 19:03:35.039139 systemd[1]: Started cri-containerd-7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070.scope - libcontainer container 7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070.
Feb 13 19:03:35.069621 containerd[1483]: time="2025-02-13T19:03:35.063400510Z" level=info msg="StartContainer for \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\" returns successfully"
Feb 13 19:03:35.086282 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:03:35.086628 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:35.086863 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:35.093387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:35.093633 systemd[1]: cri-containerd-7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070.scope: Deactivated successfully.
Feb 13 19:03:35.104986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:35.126506 containerd[1483]: time="2025-02-13T19:03:35.126241585Z" level=info msg="shim disconnected" id=7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070 namespace=k8s.io
Feb 13 19:03:35.126506 containerd[1483]: time="2025-02-13T19:03:35.126297282Z" level=warning msg="cleaning up after shim disconnected" id=7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070 namespace=k8s.io
Feb 13 19:03:35.126506 containerd[1483]: time="2025-02-13T19:03:35.126308486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:35.306092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9-rootfs.mount: Deactivated successfully.
Feb 13 19:03:35.599163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142082277.mount: Deactivated successfully.
Feb 13 19:03:35.848606 kubelet[1793]: E0213 19:03:35.848569 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:35.892481 containerd[1483]: time="2025-02-13T19:03:35.892340717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:35.893300 containerd[1483]: time="2025-02-13T19:03:35.893237898Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384"
Feb 13 19:03:35.894434 containerd[1483]: time="2025-02-13T19:03:35.894389366Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:35.898755 containerd[1483]: time="2025-02-13T19:03:35.898680191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:35.899430 containerd[1483]: time="2025-02-13T19:03:35.899260171Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.604147066s"
Feb 13 19:03:35.899430 containerd[1483]: time="2025-02-13T19:03:35.899299562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\""
Feb 13 19:03:35.901438 containerd[1483]: time="2025-02-13T19:03:35.901411923Z" level=info msg="CreateContainer within sandbox \"312d6be1e26b940ff26e8adbc41786eee51a17e948b6231d941f8c8b67f947f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:03:35.921557 containerd[1483]: time="2025-02-13T19:03:35.921497051Z" level=info msg="CreateContainer within sandbox \"312d6be1e26b940ff26e8adbc41786eee51a17e948b6231d941f8c8b67f947f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b265e16612713112823fab447f3475e9a8210c524b402ba48df545a31fe4b3c7\""
Feb 13 19:03:35.922425 containerd[1483]: time="2025-02-13T19:03:35.922356237Z" level=info msg="StartContainer for \"b265e16612713112823fab447f3475e9a8210c524b402ba48df545a31fe4b3c7\""
Feb 13 19:03:35.958137 systemd[1]: Started cri-containerd-b265e16612713112823fab447f3475e9a8210c524b402ba48df545a31fe4b3c7.scope - libcontainer container b265e16612713112823fab447f3475e9a8210c524b402ba48df545a31fe4b3c7.
Feb 13 19:03:35.996700 containerd[1483]: time="2025-02-13T19:03:35.996658888Z" level=info msg="StartContainer for \"b265e16612713112823fab447f3475e9a8210c524b402ba48df545a31fe4b3c7\" returns successfully"
Feb 13 19:03:35.999839 kubelet[1793]: E0213 19:03:35.999751 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:36.002499 kubelet[1793]: E0213 19:03:36.002457 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:36.004925 containerd[1483]: time="2025-02-13T19:03:36.004874685Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:03:36.009976 kubelet[1793]: I0213 19:03:36.009868 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzszk" podStartSLOduration=3.031177213 podStartE2EDuration="9.009850789s" podCreationTimestamp="2025-02-13 19:03:27 +0000 UTC" firstStartedPulling="2025-02-13 19:03:29.921265491 +0000 UTC m=+3.570848680" lastFinishedPulling="2025-02-13 19:03:35.899939068 +0000 UTC m=+9.549522256" observedRunningTime="2025-02-13 19:03:36.009819159 +0000 UTC m=+9.659402348" watchObservedRunningTime="2025-02-13 19:03:36.009850789 +0000 UTC m=+9.659433978"
Feb 13 19:03:36.032987 containerd[1483]: time="2025-02-13T19:03:36.031146217Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\""
Feb 13 19:03:36.033888 containerd[1483]: time="2025-02-13T19:03:36.033838424Z" level=info msg="StartContainer for \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\""
Feb 13 19:03:36.074140 systemd[1]: Started cri-containerd-961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e.scope - libcontainer container 961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e.
Feb 13 19:03:36.111497 containerd[1483]: time="2025-02-13T19:03:36.111452817Z" level=info msg="StartContainer for \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\" returns successfully"
Feb 13 19:03:36.132720 systemd[1]: cri-containerd-961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e.scope: Deactivated successfully.
Feb 13 19:03:36.329381 containerd[1483]: time="2025-02-13T19:03:36.329221626Z" level=info msg="shim disconnected" id=961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e namespace=k8s.io
Feb 13 19:03:36.329381 containerd[1483]: time="2025-02-13T19:03:36.329286959Z" level=warning msg="cleaning up after shim disconnected" id=961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e namespace=k8s.io
Feb 13 19:03:36.329381 containerd[1483]: time="2025-02-13T19:03:36.329300441Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:36.849723 kubelet[1793]: E0213 19:03:36.849668 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:37.006034 kubelet[1793]: E0213 19:03:37.006002 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:37.006167 kubelet[1793]: E0213 19:03:37.006089 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:37.008512 containerd[1483]: time="2025-02-13T19:03:37.008383983Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:03:37.024440 containerd[1483]: time="2025-02-13T19:03:37.024286332Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\""
Feb 13 19:03:37.025129 containerd[1483]: time="2025-02-13T19:03:37.025053968Z" level=info msg="StartContainer for \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\""
Feb 13 19:03:37.054152 systemd[1]: Started cri-containerd-7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce.scope - libcontainer container 7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce.
Feb 13 19:03:37.076264 systemd[1]: cri-containerd-7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce.scope: Deactivated successfully.
Feb 13 19:03:37.079673 containerd[1483]: time="2025-02-13T19:03:37.079624575Z" level=info msg="StartContainer for \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\" returns successfully"
Feb 13 19:03:37.098654 containerd[1483]: time="2025-02-13T19:03:37.098600023Z" level=info msg="shim disconnected" id=7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce namespace=k8s.io
Feb 13 19:03:37.099103 containerd[1483]: time="2025-02-13T19:03:37.098892330Z" level=warning msg="cleaning up after shim disconnected" id=7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce namespace=k8s.io
Feb 13 19:03:37.099103 containerd[1483]: time="2025-02-13T19:03:37.098937457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:37.305445 systemd[1]: run-containerd-runc-k8s.io-7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce-runc.J0peW0.mount: Deactivated successfully.
Feb 13 19:03:37.305560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce-rootfs.mount: Deactivated successfully.
Feb 13 19:03:37.850732 kubelet[1793]: E0213 19:03:37.850686 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:38.009704 kubelet[1793]: E0213 19:03:38.009589 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:38.011895 containerd[1483]: time="2025-02-13T19:03:38.011843243Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:03:38.118389 containerd[1483]: time="2025-02-13T19:03:38.118241731Z" level=info msg="CreateContainer within sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\""
Feb 13 19:03:38.119323 containerd[1483]: time="2025-02-13T19:03:38.119258143Z" level=info msg="StartContainer for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\""
Feb 13 19:03:38.146112 systemd[1]: Started cri-containerd-7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0.scope - libcontainer container 7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0.
Feb 13 19:03:38.169139 containerd[1483]: time="2025-02-13T19:03:38.169091785Z" level=info msg="StartContainer for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" returns successfully"
Feb 13 19:03:38.287988 kubelet[1793]: I0213 19:03:38.287892 1793 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 19:03:38.801950 kernel: Initializing XFRM netlink socket
Feb 13 19:03:38.851495 kubelet[1793]: E0213 19:03:38.851428 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:39.013819 kubelet[1793]: E0213 19:03:39.013294 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:39.852269 kubelet[1793]: E0213 19:03:39.852222 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:40.015309 kubelet[1793]: E0213 19:03:40.015273 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:40.429119 systemd-networkd[1400]: cilium_host: Link UP
Feb 13 19:03:40.429296 systemd-networkd[1400]: cilium_net: Link UP
Feb 13 19:03:40.429429 systemd-networkd[1400]: cilium_net: Gained carrier
Feb 13 19:03:40.429565 systemd-networkd[1400]: cilium_host: Gained carrier
Feb 13 19:03:40.439614 systemd-networkd[1400]: cilium_host: Gained IPv6LL
Feb 13 19:03:40.511652 systemd-networkd[1400]: cilium_vxlan: Link UP
Feb 13 19:03:40.511663 systemd-networkd[1400]: cilium_vxlan: Gained carrier
Feb 13 19:03:40.811939 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:03:40.853088 kubelet[1793]: E0213 19:03:40.853050 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:41.010054 systemd-networkd[1400]: cilium_net: Gained IPv6LL
Feb 13 19:03:41.016521 kubelet[1793]: E0213 19:03:41.016199 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:41.399520 systemd-networkd[1400]: lxc_health: Link UP
Feb 13 19:03:41.414173 systemd-networkd[1400]: lxc_health: Gained carrier
Feb 13 19:03:41.853699 kubelet[1793]: E0213 19:03:41.853659 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:42.097056 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL
Feb 13 19:03:42.801060 systemd-networkd[1400]: lxc_health: Gained IPv6LL
Feb 13 19:03:42.854079 kubelet[1793]: E0213 19:03:42.854024 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:43.200099 kubelet[1793]: E0213 19:03:43.200056 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:43.219927 kubelet[1793]: I0213 19:03:43.218121 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jhl7b" podStartSLOduration=11.840586331 podStartE2EDuration="16.218103449s" podCreationTimestamp="2025-02-13 19:03:27 +0000 UTC" firstStartedPulling="2025-02-13 19:03:29.917238959 +0000 UTC m=+3.566822148" lastFinishedPulling="2025-02-13 19:03:34.294756077 +0000 UTC m=+7.944339266" observedRunningTime="2025-02-13 19:03:39.03100488 +0000 UTC m=+12.680588069" watchObservedRunningTime="2025-02-13 19:03:43.218103449 +0000 UTC m=+16.867686638"
Feb 13 19:03:43.376287 systemd[1]: Created slice kubepods-besteffort-podbfb75388_e11a_4057_afb1_c09a000fa339.slice - libcontainer container kubepods-besteffort-podbfb75388_e11a_4057_afb1_c09a000fa339.slice.
Feb 13 19:03:43.455655 kubelet[1793]: I0213 19:03:43.455537 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9tq\" (UniqueName: \"kubernetes.io/projected/bfb75388-e11a-4057-afb1-c09a000fa339-kube-api-access-vz9tq\") pod \"nginx-deployment-7fcdb87857-zjw7k\" (UID: \"bfb75388-e11a-4057-afb1-c09a000fa339\") " pod="default/nginx-deployment-7fcdb87857-zjw7k"
Feb 13 19:03:43.680241 containerd[1483]: time="2025-02-13T19:03:43.679812332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zjw7k,Uid:bfb75388-e11a-4057-afb1-c09a000fa339,Namespace:default,Attempt:0,}"
Feb 13 19:03:43.764925 systemd-networkd[1400]: lxc86f06b7291ec: Link UP
Feb 13 19:03:43.766952 kernel: eth0: renamed from tmpd5bbc
Feb 13 19:03:43.774765 systemd-networkd[1400]: lxc86f06b7291ec: Gained carrier
Feb 13 19:03:43.854733 kubelet[1793]: E0213 19:03:43.854641 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:44.855813 kubelet[1793]: E0213 19:03:44.855734 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:45.297094 systemd-networkd[1400]: lxc86f06b7291ec: Gained IPv6LL
Feb 13 19:03:45.855919 kubelet[1793]: E0213 19:03:45.855845 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:45.900896 containerd[1483]: time="2025-02-13T19:03:45.900810607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:45.900896 containerd[1483]: time="2025-02-13T19:03:45.900860724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:45.900896 containerd[1483]: time="2025-02-13T19:03:45.900872714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:45.901462 containerd[1483]: time="2025-02-13T19:03:45.900958360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:45.931115 systemd[1]: Started cri-containerd-d5bbc526741c4751f367a1432de75d4c8c5d4db89ae33df9443b27ab46d20ff5.scope - libcontainer container d5bbc526741c4751f367a1432de75d4c8c5d4db89ae33df9443b27ab46d20ff5.
Feb 13 19:03:45.941672 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:03:45.960148 containerd[1483]: time="2025-02-13T19:03:45.960111908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zjw7k,Uid:bfb75388-e11a-4057-afb1-c09a000fa339,Namespace:default,Attempt:0,} returns sandbox id \"d5bbc526741c4751f367a1432de75d4c8c5d4db89ae33df9443b27ab46d20ff5\""
Feb 13 19:03:45.961756 containerd[1483]: time="2025-02-13T19:03:45.961714490Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:03:46.846297 kubelet[1793]: E0213 19:03:46.846245 1793 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:46.856607 kubelet[1793]: E0213 19:03:46.856549 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:47.715211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118283075.mount: Deactivated successfully.
Feb 13 19:03:47.857341 kubelet[1793]: E0213 19:03:47.857226 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:48.857620 kubelet[1793]: E0213 19:03:48.857562 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:48.879381 containerd[1483]: time="2025-02-13T19:03:48.879291654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:48.880042 containerd[1483]: time="2025-02-13T19:03:48.879988853Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:03:48.880662 containerd[1483]: time="2025-02-13T19:03:48.880617731Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:48.884548 containerd[1483]: time="2025-02-13T19:03:48.884493140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:48.885371 containerd[1483]: time="2025-02-13T19:03:48.885258139Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.923497685s" Feb 13 19:03:48.885371 containerd[1483]: time="2025-02-13T19:03:48.885301554Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:03:48.887851 containerd[1483]: 
time="2025-02-13T19:03:48.887810150Z" level=info msg="CreateContainer within sandbox \"d5bbc526741c4751f367a1432de75d4c8c5d4db89ae33df9443b27ab46d20ff5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:03:48.898903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110912447.mount: Deactivated successfully. Feb 13 19:03:48.899504 containerd[1483]: time="2025-02-13T19:03:48.899435617Z" level=info msg="CreateContainer within sandbox \"d5bbc526741c4751f367a1432de75d4c8c5d4db89ae33df9443b27ab46d20ff5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"51324e81a9394c61bc1b200b4acea61f277560774e1abd6a1db1bb90794ec8a4\"" Feb 13 19:03:48.900423 containerd[1483]: time="2025-02-13T19:03:48.900382032Z" level=info msg="StartContainer for \"51324e81a9394c61bc1b200b4acea61f277560774e1abd6a1db1bb90794ec8a4\"" Feb 13 19:03:48.939622 systemd[1]: Started cri-containerd-51324e81a9394c61bc1b200b4acea61f277560774e1abd6a1db1bb90794ec8a4.scope - libcontainer container 51324e81a9394c61bc1b200b4acea61f277560774e1abd6a1db1bb90794ec8a4. 
Feb 13 19:03:48.964618 containerd[1483]: time="2025-02-13T19:03:48.964575393Z" level=info msg="StartContainer for \"51324e81a9394c61bc1b200b4acea61f277560774e1abd6a1db1bb90794ec8a4\" returns successfully" Feb 13 19:03:49.051281 kubelet[1793]: I0213 19:03:49.051201 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-zjw7k" podStartSLOduration=3.125965482 podStartE2EDuration="6.051183588s" podCreationTimestamp="2025-02-13 19:03:43 +0000 UTC" firstStartedPulling="2025-02-13 19:03:45.96122663 +0000 UTC m=+19.610809819" lastFinishedPulling="2025-02-13 19:03:48.886444736 +0000 UTC m=+22.536027925" observedRunningTime="2025-02-13 19:03:49.050895653 +0000 UTC m=+22.700478842" watchObservedRunningTime="2025-02-13 19:03:49.051183588 +0000 UTC m=+22.700766777" Feb 13 19:03:49.858974 kubelet[1793]: E0213 19:03:49.858902 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:50.859162 kubelet[1793]: E0213 19:03:50.859112 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:51.859634 kubelet[1793]: E0213 19:03:51.859586 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:52.708363 kubelet[1793]: I0213 19:03:52.708322 1793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:03:52.708862 kubelet[1793]: E0213 19:03:52.708738 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:52.860194 kubelet[1793]: E0213 19:03:52.860114 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:53.046002 kubelet[1793]: E0213 19:03:53.045780 1793 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:03:53.861286 kubelet[1793]: E0213 19:03:53.861231 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:54.862350 kubelet[1793]: E0213 19:03:54.862307 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:55.188030 systemd[1]: Created slice kubepods-besteffort-pod796adb91_2dd5_4057_a41f_088d6da15281.slice - libcontainer container kubepods-besteffort-pod796adb91_2dd5_4057_a41f_088d6da15281.slice. Feb 13 19:03:55.226541 kubelet[1793]: I0213 19:03:55.226496 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kph\" (UniqueName: \"kubernetes.io/projected/796adb91-2dd5-4057-a41f-088d6da15281-kube-api-access-v4kph\") pod \"nfs-server-provisioner-0\" (UID: \"796adb91-2dd5-4057-a41f-088d6da15281\") " pod="default/nfs-server-provisioner-0" Feb 13 19:03:55.226541 kubelet[1793]: I0213 19:03:55.226542 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/796adb91-2dd5-4057-a41f-088d6da15281-data\") pod \"nfs-server-provisioner-0\" (UID: \"796adb91-2dd5-4057-a41f-088d6da15281\") " pod="default/nfs-server-provisioner-0" Feb 13 19:03:55.491211 containerd[1483]: time="2025-02-13T19:03:55.491083958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:796adb91-2dd5-4057-a41f-088d6da15281,Namespace:default,Attempt:0,}" Feb 13 19:03:55.533948 kernel: eth0: renamed from tmp992dd Feb 13 19:03:55.539507 systemd-networkd[1400]: lxcdf95f45f3b3e: Link UP Feb 13 19:03:55.540952 systemd-networkd[1400]: lxcdf95f45f3b3e: Gained carrier Feb 13 19:03:55.747082 containerd[1483]: 
time="2025-02-13T19:03:55.746771953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:55.747082 containerd[1483]: time="2025-02-13T19:03:55.746838320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:55.747082 containerd[1483]: time="2025-02-13T19:03:55.746850122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:55.747082 containerd[1483]: time="2025-02-13T19:03:55.746969334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:55.770126 systemd[1]: Started cri-containerd-992dde6dce42f0b9e91d87aa2c847decca0389913886dc1847835a3ba5d23964.scope - libcontainer container 992dde6dce42f0b9e91d87aa2c847decca0389913886dc1847835a3ba5d23964. 
Feb 13 19:03:55.781424 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:03:55.797965 containerd[1483]: time="2025-02-13T19:03:55.797891654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:796adb91-2dd5-4057-a41f-088d6da15281,Namespace:default,Attempt:0,} returns sandbox id \"992dde6dce42f0b9e91d87aa2c847decca0389913886dc1847835a3ba5d23964\"" Feb 13 19:03:55.799650 containerd[1483]: time="2025-02-13T19:03:55.799421818Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:03:55.862979 kubelet[1793]: E0213 19:03:55.862932 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:56.753102 systemd-networkd[1400]: lxcdf95f45f3b3e: Gained IPv6LL Feb 13 19:03:56.863197 kubelet[1793]: E0213 19:03:56.863149 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:57.405170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403535805.mount: Deactivated successfully. 
Feb 13 19:03:57.864367 kubelet[1793]: E0213 19:03:57.864264 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:58.864766 kubelet[1793]: E0213 19:03:58.864728 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:58.896678 containerd[1483]: time="2025-02-13T19:03:58.896201920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:58.896678 containerd[1483]: time="2025-02-13T19:03:58.896626278Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Feb 13 19:03:58.897655 containerd[1483]: time="2025-02-13T19:03:58.897605327Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:58.900521 containerd[1483]: time="2025-02-13T19:03:58.900475707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:58.902536 containerd[1483]: time="2025-02-13T19:03:58.901623611Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.102161029s" Feb 13 19:03:58.902536 containerd[1483]: time="2025-02-13T19:03:58.901660255Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:03:58.904132 containerd[1483]: time="2025-02-13T19:03:58.904073593Z" level=info msg="CreateContainer within sandbox \"992dde6dce42f0b9e91d87aa2c847decca0389913886dc1847835a3ba5d23964\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:03:58.956123 containerd[1483]: time="2025-02-13T19:03:58.956067626Z" level=info msg="CreateContainer within sandbox \"992dde6dce42f0b9e91d87aa2c847decca0389913886dc1847835a3ba5d23964\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3cc6eece6831a56dbbdc27c036ee4e79dbc69a5c8c67bca5b5cf31a4aa07f0dd\"" Feb 13 19:03:58.956783 containerd[1483]: time="2025-02-13T19:03:58.956746608Z" level=info msg="StartContainer for \"3cc6eece6831a56dbbdc27c036ee4e79dbc69a5c8c67bca5b5cf31a4aa07f0dd\"" Feb 13 19:03:59.030095 systemd[1]: Started cri-containerd-3cc6eece6831a56dbbdc27c036ee4e79dbc69a5c8c67bca5b5cf31a4aa07f0dd.scope - libcontainer container 3cc6eece6831a56dbbdc27c036ee4e79dbc69a5c8c67bca5b5cf31a4aa07f0dd. 
Feb 13 19:03:59.092062 containerd[1483]: time="2025-02-13T19:03:59.092010798Z" level=info msg="StartContainer for \"3cc6eece6831a56dbbdc27c036ee4e79dbc69a5c8c67bca5b5cf31a4aa07f0dd\" returns successfully" Feb 13 19:03:59.865850 kubelet[1793]: E0213 19:03:59.865797 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:00.868493 kubelet[1793]: E0213 19:04:00.868421 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:01.868592 kubelet[1793]: E0213 19:04:01.868525 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:02.869033 kubelet[1793]: E0213 19:04:02.868970 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:03.067937 update_engine[1472]: I20250213 19:04:03.067832 1472 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:04:03.092309 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3181) Feb 13 19:04:03.131097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3179) Feb 13 19:04:03.162956 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3179) Feb 13 19:04:03.869812 kubelet[1793]: E0213 19:04:03.869738 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:04.870397 kubelet[1793]: E0213 19:04:04.870311 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:05.870949 kubelet[1793]: E0213 19:04:05.870873 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:06.843791 kubelet[1793]: E0213 19:04:06.843743 1793 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:06.871271 kubelet[1793]: E0213 19:04:06.871230 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:07.872035 kubelet[1793]: E0213 19:04:07.871998 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:08.872609 kubelet[1793]: E0213 19:04:08.872559 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:09.336814 kubelet[1793]: I0213 19:04:09.336757 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.23325694 podStartE2EDuration="14.336740454s" podCreationTimestamp="2025-02-13 19:03:55 +0000 UTC" firstStartedPulling="2025-02-13 19:03:55.799151909 +0000 UTC 
m=+29.448735098" lastFinishedPulling="2025-02-13 19:03:58.902635423 +0000 UTC m=+32.552218612" observedRunningTime="2025-02-13 19:04:00.074561993 +0000 UTC m=+33.724145182" watchObservedRunningTime="2025-02-13 19:04:09.336740454 +0000 UTC m=+42.986323603" Feb 13 19:04:09.341633 systemd[1]: Created slice kubepods-besteffort-pod9ff02293_a386_477e_92a7_362f773a0af8.slice - libcontainer container kubepods-besteffort-pod9ff02293_a386_477e_92a7_362f773a0af8.slice. Feb 13 19:04:09.505375 kubelet[1793]: I0213 19:04:09.505325 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8bc1a005-bd40-46c6-872c-b8a72ac9c5fc\" (UniqueName: \"kubernetes.io/nfs/9ff02293-a386-477e-92a7-362f773a0af8-pvc-8bc1a005-bd40-46c6-872c-b8a72ac9c5fc\") pod \"test-pod-1\" (UID: \"9ff02293-a386-477e-92a7-362f773a0af8\") " pod="default/test-pod-1" Feb 13 19:04:09.505375 kubelet[1793]: I0213 19:04:09.505380 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsv28\" (UniqueName: \"kubernetes.io/projected/9ff02293-a386-477e-92a7-362f773a0af8-kube-api-access-rsv28\") pod \"test-pod-1\" (UID: \"9ff02293-a386-477e-92a7-362f773a0af8\") " pod="default/test-pod-1" Feb 13 19:04:09.638944 kernel: FS-Cache: Loaded Feb 13 19:04:09.667121 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:04:09.667250 kernel: RPC: Registered udp transport module. Feb 13 19:04:09.667268 kernel: RPC: Registered tcp transport module. Feb 13 19:04:09.667284 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:04:09.668721 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:04:09.837519 kernel: NFS: Registering the id_resolver key type Feb 13 19:04:09.837663 kernel: Key type id_resolver registered Feb 13 19:04:09.837682 kernel: Key type id_legacy registered Feb 13 19:04:09.867620 nfsidmap[3211]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:04:09.874180 kubelet[1793]: E0213 19:04:09.874111 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:09.874444 nfsidmap[3214]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:04:09.945021 containerd[1483]: time="2025-02-13T19:04:09.944748592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9ff02293-a386-477e-92a7-362f773a0af8,Namespace:default,Attempt:0,}" Feb 13 19:04:09.996956 kernel: eth0: renamed from tmpef1af Feb 13 19:04:10.007150 systemd-networkd[1400]: lxc6da774b617ff: Link UP Feb 13 19:04:10.007549 systemd-networkd[1400]: lxc6da774b617ff: Gained carrier Feb 13 19:04:10.156653 containerd[1483]: time="2025-02-13T19:04:10.156522870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:10.156653 containerd[1483]: time="2025-02-13T19:04:10.156581793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:10.156653 containerd[1483]: time="2025-02-13T19:04:10.156596994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:10.156877 containerd[1483]: time="2025-02-13T19:04:10.156678798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:10.179141 systemd[1]: Started cri-containerd-ef1afecf4853c4af85c7938599a928954bb4a44fabd5a040899c5d8cef216eec.scope - libcontainer container ef1afecf4853c4af85c7938599a928954bb4a44fabd5a040899c5d8cef216eec. Feb 13 19:04:10.190598 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:04:10.207400 containerd[1483]: time="2025-02-13T19:04:10.207278838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9ff02293-a386-477e-92a7-362f773a0af8,Namespace:default,Attempt:0,} returns sandbox id \"ef1afecf4853c4af85c7938599a928954bb4a44fabd5a040899c5d8cef216eec\"" Feb 13 19:04:10.209489 containerd[1483]: time="2025-02-13T19:04:10.209439386Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:04:10.543022 containerd[1483]: time="2025-02-13T19:04:10.542870152Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:10.543604 containerd[1483]: time="2025-02-13T19:04:10.543553386Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:04:10.547113 containerd[1483]: time="2025-02-13T19:04:10.547080082Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 337.581134ms" Feb 13 19:04:10.547180 containerd[1483]: time="2025-02-13T19:04:10.547118804Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:04:10.549474 containerd[1483]: time="2025-02-13T19:04:10.549439800Z" 
level=info msg="CreateContainer within sandbox \"ef1afecf4853c4af85c7938599a928954bb4a44fabd5a040899c5d8cef216eec\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:04:10.565391 containerd[1483]: time="2025-02-13T19:04:10.565346232Z" level=info msg="CreateContainer within sandbox \"ef1afecf4853c4af85c7938599a928954bb4a44fabd5a040899c5d8cef216eec\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e92736194b3058f3135ac8d5cfae798f985cba07c59515e0b4c00d48280661fe\"" Feb 13 19:04:10.566146 containerd[1483]: time="2025-02-13T19:04:10.566066708Z" level=info msg="StartContainer for \"e92736194b3058f3135ac8d5cfae798f985cba07c59515e0b4c00d48280661fe\"" Feb 13 19:04:10.602400 systemd[1]: Started cri-containerd-e92736194b3058f3135ac8d5cfae798f985cba07c59515e0b4c00d48280661fe.scope - libcontainer container e92736194b3058f3135ac8d5cfae798f985cba07c59515e0b4c00d48280661fe. Feb 13 19:04:10.634066 containerd[1483]: time="2025-02-13T19:04:10.634013932Z" level=info msg="StartContainer for \"e92736194b3058f3135ac8d5cfae798f985cba07c59515e0b4c00d48280661fe\" returns successfully" Feb 13 19:04:10.874700 kubelet[1793]: E0213 19:04:10.874643 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:11.095088 kubelet[1793]: I0213 19:04:11.095014 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.755918359 podStartE2EDuration="16.094996967s" podCreationTimestamp="2025-02-13 19:03:55 +0000 UTC" firstStartedPulling="2025-02-13 19:04:10.208875398 +0000 UTC m=+43.858458587" lastFinishedPulling="2025-02-13 19:04:10.547954006 +0000 UTC m=+44.197537195" observedRunningTime="2025-02-13 19:04:11.09484884 +0000 UTC m=+44.744432029" watchObservedRunningTime="2025-02-13 19:04:11.094996967 +0000 UTC m=+44.744580156" Feb 13 19:04:11.281119 systemd-networkd[1400]: lxc6da774b617ff: Gained IPv6LL Feb 13 19:04:11.875258 
kubelet[1793]: E0213 19:04:11.875203 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:12.875823 kubelet[1793]: E0213 19:04:12.875771 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:13.527656 systemd[1]: run-containerd-runc-k8s.io-7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0-runc.2yovRX.mount: Deactivated successfully. Feb 13 19:04:13.551846 containerd[1483]: time="2025-02-13T19:04:13.551792671Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:04:13.557525 containerd[1483]: time="2025-02-13T19:04:13.557468799Z" level=info msg="StopContainer for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" with timeout 2 (s)" Feb 13 19:04:13.557959 containerd[1483]: time="2025-02-13T19:04:13.557936939Z" level=info msg="Stop container \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" with signal terminated" Feb 13 19:04:13.563553 systemd-networkd[1400]: lxc_health: Link DOWN Feb 13 19:04:13.563560 systemd-networkd[1400]: lxc_health: Lost carrier Feb 13 19:04:13.578608 systemd[1]: cri-containerd-7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0.scope: Deactivated successfully. Feb 13 19:04:13.578944 systemd[1]: cri-containerd-7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0.scope: Consumed 6.670s CPU time, 123.4M memory peak, 220K read from disk, 12.9M written to disk. Feb 13 19:04:13.604110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:13.614970 containerd[1483]: time="2025-02-13T19:04:13.614858984Z" level=info msg="shim disconnected" id=7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0 namespace=k8s.io Feb 13 19:04:13.614970 containerd[1483]: time="2025-02-13T19:04:13.614932147Z" level=warning msg="cleaning up after shim disconnected" id=7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0 namespace=k8s.io Feb 13 19:04:13.614970 containerd[1483]: time="2025-02-13T19:04:13.614941148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.628884 containerd[1483]: time="2025-02-13T19:04:13.628470618Z" level=info msg="StopContainer for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" returns successfully" Feb 13 19:04:13.629400 containerd[1483]: time="2025-02-13T19:04:13.629360177Z" level=info msg="StopPodSandbox for \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\"" Feb 13 19:04:13.632886 containerd[1483]: time="2025-02-13T19:04:13.632828569Z" level=info msg="Container to stop \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.632886 containerd[1483]: time="2025-02-13T19:04:13.632874371Z" level=info msg="Container to stop \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.632886 containerd[1483]: time="2025-02-13T19:04:13.632885571Z" level=info msg="Container to stop \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.632886 containerd[1483]: time="2025-02-13T19:04:13.632896012Z" level=info msg="Container to stop \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.633085 
containerd[1483]: time="2025-02-13T19:04:13.632919973Z" level=info msg="Container to stop \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:13.634880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d-shm.mount: Deactivated successfully. Feb 13 19:04:13.639708 systemd[1]: cri-containerd-03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d.scope: Deactivated successfully. Feb 13 19:04:13.669473 containerd[1483]: time="2025-02-13T19:04:13.669361204Z" level=info msg="shim disconnected" id=03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d namespace=k8s.io Feb 13 19:04:13.669473 containerd[1483]: time="2025-02-13T19:04:13.669449607Z" level=warning msg="cleaning up after shim disconnected" id=03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d namespace=k8s.io Feb 13 19:04:13.669970 containerd[1483]: time="2025-02-13T19:04:13.669705939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.682920 containerd[1483]: time="2025-02-13T19:04:13.682854273Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:04:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:04:13.693266 containerd[1483]: time="2025-02-13T19:04:13.692863470Z" level=info msg="TearDown network for sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" successfully" Feb 13 19:04:13.693266 containerd[1483]: time="2025-02-13T19:04:13.692911872Z" level=info msg="StopPodSandbox for \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" returns successfully" Feb 13 19:04:13.832084 kubelet[1793]: I0213 19:04:13.832020 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cni-path\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832084 kubelet[1793]: I0213 19:04:13.832070 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-kernel\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832243 kubelet[1793]: I0213 19:04:13.832097 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-config-path\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832243 kubelet[1793]: I0213 19:04:13.832114 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-etc-cni-netd\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832243 kubelet[1793]: I0213 19:04:13.832133 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hostproc\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832243 kubelet[1793]: I0213 19:04:13.832135 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.832243 kubelet[1793]: I0213 19:04:13.832148 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-lib-modules\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832243 kubelet[1793]: I0213 19:04:13.832204 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-cgroup\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832375 kubelet[1793]: I0213 19:04:13.832228 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-clustermesh-secrets\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832375 kubelet[1793]: I0213 19:04:13.832249 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljb8h\" (UniqueName: \"kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-kube-api-access-ljb8h\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832375 kubelet[1793]: I0213 19:04:13.832267 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-xtables-lock\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832375 kubelet[1793]: I0213 19:04:13.832283 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hubble-tls\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832375 kubelet[1793]: I0213 19:04:13.832298 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-run\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832375 kubelet[1793]: I0213 19:04:13.832312 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-net\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832492 kubelet[1793]: I0213 19:04:13.832327 1793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-bpf-maps\") pod \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\" (UID: \"b71c1b64-3f7d-4b64-9c96-00b45bca90d4\") " Feb 13 19:04:13.832492 kubelet[1793]: I0213 19:04:13.832358 1793 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cni-path\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.832492 kubelet[1793]: I0213 19:04:13.832174 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.832492 kubelet[1793]: I0213 19:04:13.832186 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.832492 kubelet[1793]: I0213 19:04:13.832378 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.832664 kubelet[1793]: I0213 19:04:13.832403 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.832664 kubelet[1793]: I0213 19:04:13.832460 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.832664 kubelet[1793]: I0213 19:04:13.832492 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.836085 kubelet[1793]: I0213 19:04:13.835962 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.836473 kubelet[1793]: I0213 19:04:13.836158 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.836473 kubelet[1793]: I0213 19:04:13.836179 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:13.838710 kubelet[1793]: I0213 19:04:13.838656 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:04:13.844686 kubelet[1793]: I0213 19:04:13.844621 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:13.844861 kubelet[1793]: I0213 19:04:13.844775 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-kube-api-access-ljb8h" (OuterVolumeSpecName: "kube-api-access-ljb8h") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "kube-api-access-ljb8h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:13.845198 kubelet[1793]: I0213 19:04:13.845163 1793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b71c1b64-3f7d-4b64-9c96-00b45bca90d4" (UID: "b71c1b64-3f7d-4b64-9c96-00b45bca90d4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:04:13.876534 kubelet[1793]: E0213 19:04:13.876447 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932767 1793 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-xtables-lock\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932799 1793 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hubble-tls\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932809 1793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljb8h\" (UniqueName: \"kubernetes.io/projected/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-kube-api-access-ljb8h\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932820 1793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-net\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932828 1793 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-bpf-maps\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932836 1793 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-run\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932844 1793 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-hostproc\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.932839 kubelet[1793]: I0213 19:04:13.932851 1793 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-lib-modules\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.933164 kubelet[1793]: I0213 19:04:13.932861 1793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-host-proc-sys-kernel\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.933164 kubelet[1793]: I0213 19:04:13.932869 1793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-config-path\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.933164 kubelet[1793]: I0213 19:04:13.932878 1793 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-etc-cni-netd\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.933164 kubelet[1793]: I0213 19:04:13.932885 1793 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-cilium-cgroup\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:13.933164 kubelet[1793]: I0213 19:04:13.932893 1793 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b71c1b64-3f7d-4b64-9c96-00b45bca90d4-clustermesh-secrets\") on node \"10.0.0.49\" DevicePath \"\"" Feb 13 19:04:14.094487 kubelet[1793]: I0213 19:04:14.094362 1793 scope.go:117] "RemoveContainer" containerID="7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0" Feb 13 19:04:14.097043 containerd[1483]: 
time="2025-02-13T19:04:14.097002181Z" level=info msg="RemoveContainer for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\"" Feb 13 19:04:14.099438 systemd[1]: Removed slice kubepods-burstable-podb71c1b64_3f7d_4b64_9c96_00b45bca90d4.slice - libcontainer container kubepods-burstable-podb71c1b64_3f7d_4b64_9c96_00b45bca90d4.slice. Feb 13 19:04:14.099739 systemd[1]: kubepods-burstable-podb71c1b64_3f7d_4b64_9c96_00b45bca90d4.slice: Consumed 6.828s CPU time, 123.9M memory peak, 220K read from disk, 12.9M written to disk. Feb 13 19:04:14.100654 containerd[1483]: time="2025-02-13T19:04:14.100318320Z" level=info msg="RemoveContainer for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" returns successfully" Feb 13 19:04:14.101198 kubelet[1793]: I0213 19:04:14.100864 1793 scope.go:117] "RemoveContainer" containerID="7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce" Feb 13 19:04:14.105848 containerd[1483]: time="2025-02-13T19:04:14.105732787Z" level=info msg="RemoveContainer for \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\"" Feb 13 19:04:14.108306 containerd[1483]: time="2025-02-13T19:04:14.108272693Z" level=info msg="RemoveContainer for \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\" returns successfully" Feb 13 19:04:14.108520 kubelet[1793]: I0213 19:04:14.108487 1793 scope.go:117] "RemoveContainer" containerID="961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e" Feb 13 19:04:14.110357 containerd[1483]: time="2025-02-13T19:04:14.110184613Z" level=info msg="RemoveContainer for \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\"" Feb 13 19:04:14.113773 containerd[1483]: time="2025-02-13T19:04:14.113737962Z" level=info msg="RemoveContainer for \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\" returns successfully" Feb 13 19:04:14.113999 kubelet[1793]: I0213 19:04:14.113980 1793 scope.go:117] "RemoveContainer" 
containerID="7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070" Feb 13 19:04:14.115123 containerd[1483]: time="2025-02-13T19:04:14.115092178Z" level=info msg="RemoveContainer for \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\"" Feb 13 19:04:14.117473 containerd[1483]: time="2025-02-13T19:04:14.117435637Z" level=info msg="RemoveContainer for \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\" returns successfully" Feb 13 19:04:14.117687 kubelet[1793]: I0213 19:04:14.117661 1793 scope.go:117] "RemoveContainer" containerID="d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9" Feb 13 19:04:14.118939 containerd[1483]: time="2025-02-13T19:04:14.118893418Z" level=info msg="RemoveContainer for \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\"" Feb 13 19:04:14.121578 containerd[1483]: time="2025-02-13T19:04:14.121467845Z" level=info msg="RemoveContainer for \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\" returns successfully" Feb 13 19:04:14.121712 kubelet[1793]: I0213 19:04:14.121687 1793 scope.go:117] "RemoveContainer" containerID="7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0" Feb 13 19:04:14.122049 containerd[1483]: time="2025-02-13T19:04:14.121945545Z" level=error msg="ContainerStatus for \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\": not found" Feb 13 19:04:14.122215 kubelet[1793]: E0213 19:04:14.122189 1793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\": not found" containerID="7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0" Feb 13 19:04:14.122304 kubelet[1793]: I0213 19:04:14.122227 
1793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0"} err="failed to get container status \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e46441751a119922a1f5452535b2771c0a0d24306416a375fc8ff4d71bdf4b0\": not found" Feb 13 19:04:14.122336 kubelet[1793]: I0213 19:04:14.122307 1793 scope.go:117] "RemoveContainer" containerID="7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce" Feb 13 19:04:14.122551 containerd[1483]: time="2025-02-13T19:04:14.122523050Z" level=error msg="ContainerStatus for \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\": not found" Feb 13 19:04:14.122675 kubelet[1793]: E0213 19:04:14.122629 1793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\": not found" containerID="7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce" Feb 13 19:04:14.122675 kubelet[1793]: I0213 19:04:14.122656 1793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce"} err="failed to get container status \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"7724c692a1c14962ae32cb61887ee1b52a6056cd395c9d10bc963bf3bcb143ce\": not found" Feb 13 19:04:14.122675 kubelet[1793]: I0213 19:04:14.122672 1793 scope.go:117] "RemoveContainer" 
containerID="961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e" Feb 13 19:04:14.123080 containerd[1483]: time="2025-02-13T19:04:14.122983709Z" level=error msg="ContainerStatus for \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\": not found" Feb 13 19:04:14.123179 kubelet[1793]: E0213 19:04:14.123153 1793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\": not found" containerID="961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e" Feb 13 19:04:14.123284 kubelet[1793]: I0213 19:04:14.123176 1793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e"} err="failed to get container status \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\": rpc error: code = NotFound desc = an error occurred when try to find container \"961cccd74dc21bd0f1169ac4daffc21290bf038a468a45fc9499ed97049b764e\": not found" Feb 13 19:04:14.123284 kubelet[1793]: I0213 19:04:14.123191 1793 scope.go:117] "RemoveContainer" containerID="7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070" Feb 13 19:04:14.123482 containerd[1483]: time="2025-02-13T19:04:14.123448888Z" level=error msg="ContainerStatus for \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\": not found" Feb 13 19:04:14.123638 kubelet[1793]: E0213 19:04:14.123611 1793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\": not found" containerID="7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070" Feb 13 19:04:14.123679 kubelet[1793]: I0213 19:04:14.123645 1793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070"} err="failed to get container status \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\": rpc error: code = NotFound desc = an error occurred when try to find container \"7de2000276ee47016e44ee18104f293809b48e79587e3a9a2b70b247bc888070\": not found" Feb 13 19:04:14.123679 kubelet[1793]: I0213 19:04:14.123663 1793 scope.go:117] "RemoveContainer" containerID="d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9" Feb 13 19:04:14.123878 containerd[1483]: time="2025-02-13T19:04:14.123847465Z" level=error msg="ContainerStatus for \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\": not found" Feb 13 19:04:14.124143 kubelet[1793]: E0213 19:04:14.124121 1793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\": not found" containerID="d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9" Feb 13 19:04:14.124194 kubelet[1793]: I0213 19:04:14.124153 1793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9"} err="failed to get container status \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"d43ea3a8e716f390172f5bf783c8e7222f63f01dae5106d696b6967cff3ab1a9\": not found" Feb 13 19:04:14.525249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d-rootfs.mount: Deactivated successfully. Feb 13 19:04:14.525374 systemd[1]: var-lib-kubelet-pods-b71c1b64\x2d3f7d\x2d4b64\x2d9c96\x2d00b45bca90d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljb8h.mount: Deactivated successfully. Feb 13 19:04:14.525431 systemd[1]: var-lib-kubelet-pods-b71c1b64\x2d3f7d\x2d4b64\x2d9c96\x2d00b45bca90d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:04:14.525484 systemd[1]: var-lib-kubelet-pods-b71c1b64\x2d3f7d\x2d4b64\x2d9c96\x2d00b45bca90d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:04:14.876840 kubelet[1793]: E0213 19:04:14.876790 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:14.968071 kubelet[1793]: I0213 19:04:14.968025 1793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71c1b64-3f7d-4b64-9c96-00b45bca90d4" path="/var/lib/kubelet/pods/b71c1b64-3f7d-4b64-9c96-00b45bca90d4/volumes" Feb 13 19:04:15.881762 kubelet[1793]: E0213 19:04:15.881704 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:16.431990 kubelet[1793]: I0213 19:04:16.431946 1793 memory_manager.go:355] "RemoveStaleState removing state" podUID="b71c1b64-3f7d-4b64-9c96-00b45bca90d4" containerName="cilium-agent" Feb 13 19:04:16.442970 systemd[1]: Created slice kubepods-burstable-pod51db10e9_b0aa_4c96_b947_9ffaf50bd9ce.slice - libcontainer container kubepods-burstable-pod51db10e9_b0aa_4c96_b947_9ffaf50bd9ce.slice. 
Feb 13 19:04:16.460749 systemd[1]: Created slice kubepods-besteffort-poded8dab0c_0221_4e49_b2db_c691061ad547.slice - libcontainer container kubepods-besteffort-poded8dab0c_0221_4e49_b2db_c691061ad547.slice. Feb 13 19:04:16.548757 kubelet[1793]: I0213 19:04:16.548716 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-lib-modules\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.548757 kubelet[1793]: I0213 19:04:16.548759 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-xtables-lock\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549064 kubelet[1793]: I0213 19:04:16.548784 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-cilium-config-path\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549064 kubelet[1793]: I0213 19:04:16.548806 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-cilium-ipsec-secrets\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549064 kubelet[1793]: I0213 19:04:16.548822 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-hubble-tls\") pod \"cilium-jx2cw\" 
(UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549064 kubelet[1793]: I0213 19:04:16.548853 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-cilium-run\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549064 kubelet[1793]: I0213 19:04:16.548869 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvw2x\" (UniqueName: \"kubernetes.io/projected/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-kube-api-access-pvw2x\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549296 kubelet[1793]: I0213 19:04:16.548887 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-clustermesh-secrets\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549296 kubelet[1793]: I0213 19:04:16.548901 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-host-proc-sys-net\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549296 kubelet[1793]: I0213 19:04:16.548991 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-etc-cni-netd\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw" Feb 13 19:04:16.549296 
kubelet[1793]: I0213 19:04:16.549034 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-hostproc\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw"
Feb 13 19:04:16.549296 kubelet[1793]: I0213 19:04:16.549062 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed8dab0c-0221-4e49-b2db-c691061ad547-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nqx8t\" (UID: \"ed8dab0c-0221-4e49-b2db-c691061ad547\") " pod="kube-system/cilium-operator-6c4d7847fc-nqx8t"
Feb 13 19:04:16.549418 kubelet[1793]: I0213 19:04:16.549107 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-bpf-maps\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw"
Feb 13 19:04:16.549418 kubelet[1793]: I0213 19:04:16.549125 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-cni-path\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw"
Feb 13 19:04:16.549418 kubelet[1793]: I0213 19:04:16.549156 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-host-proc-sys-kernel\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw"
Feb 13 19:04:16.549418 kubelet[1793]: I0213 19:04:16.549190 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5846j\" (UniqueName: \"kubernetes.io/projected/ed8dab0c-0221-4e49-b2db-c691061ad547-kube-api-access-5846j\") pod \"cilium-operator-6c4d7847fc-nqx8t\" (UID: \"ed8dab0c-0221-4e49-b2db-c691061ad547\") " pod="kube-system/cilium-operator-6c4d7847fc-nqx8t"
Feb 13 19:04:16.549418 kubelet[1793]: I0213 19:04:16.549206 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51db10e9-b0aa-4c96-b947-9ffaf50bd9ce-cilium-cgroup\") pod \"cilium-jx2cw\" (UID: \"51db10e9-b0aa-4c96-b947-9ffaf50bd9ce\") " pod="kube-system/cilium-jx2cw"
Feb 13 19:04:16.758795 kubelet[1793]: E0213 19:04:16.758628 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:16.759661 containerd[1483]: time="2025-02-13T19:04:16.759597232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jx2cw,Uid:51db10e9-b0aa-4c96-b947-9ffaf50bd9ce,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:16.763970 kubelet[1793]: E0213 19:04:16.763665 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:16.764659 containerd[1483]: time="2025-02-13T19:04:16.764325855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nqx8t,Uid:ed8dab0c-0221-4e49-b2db-c691061ad547,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:16.786301 containerd[1483]: time="2025-02-13T19:04:16.785783483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:16.786924 containerd[1483]: time="2025-02-13T19:04:16.786284222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:16.786924 containerd[1483]: time="2025-02-13T19:04:16.786824483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:16.787488 containerd[1483]: time="2025-02-13T19:04:16.787415466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:16.791276 containerd[1483]: time="2025-02-13T19:04:16.791147690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:16.791276 containerd[1483]: time="2025-02-13T19:04:16.791216932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:16.791276 containerd[1483]: time="2025-02-13T19:04:16.791233453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:16.791435 containerd[1483]: time="2025-02-13T19:04:16.791324097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:16.808820 systemd[1]: Started cri-containerd-43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac.scope - libcontainer container 43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac.
Feb 13 19:04:16.816804 systemd[1]: Started cri-containerd-9c55e05a784fe5079e775264b13f4e76ab38651c6139b5938fe5027797bd1f9c.scope - libcontainer container 9c55e05a784fe5079e775264b13f4e76ab38651c6139b5938fe5027797bd1f9c.
Feb 13 19:04:16.834332 containerd[1483]: time="2025-02-13T19:04:16.834286635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jx2cw,Uid:51db10e9-b0aa-4c96-b947-9ffaf50bd9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\""
Feb 13 19:04:16.835273 kubelet[1793]: E0213 19:04:16.835248 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:16.838585 containerd[1483]: time="2025-02-13T19:04:16.838461796Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:04:16.853234 containerd[1483]: time="2025-02-13T19:04:16.853178324Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d\""
Feb 13 19:04:16.853967 containerd[1483]: time="2025-02-13T19:04:16.853938033Z" level=info msg="StartContainer for \"109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d\""
Feb 13 19:04:16.859643 containerd[1483]: time="2025-02-13T19:04:16.858348843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nqx8t,Uid:ed8dab0c-0221-4e49-b2db-c691061ad547,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c55e05a784fe5079e775264b13f4e76ab38651c6139b5938fe5027797bd1f9c\""
Feb 13 19:04:16.859781 kubelet[1793]: E0213 19:04:16.859124 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:16.860499 containerd[1483]: time="2025-02-13T19:04:16.860470685Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 19:04:16.878127 systemd[1]: Started cri-containerd-109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d.scope - libcontainer container 109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d.
Feb 13 19:04:16.882029 kubelet[1793]: E0213 19:04:16.881967 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:16.900520 containerd[1483]: time="2025-02-13T19:04:16.900477229Z" level=info msg="StartContainer for \"109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d\" returns successfully"
Feb 13 19:04:16.941845 systemd[1]: cri-containerd-109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d.scope: Deactivated successfully.
Feb 13 19:04:16.970020 containerd[1483]: time="2025-02-13T19:04:16.969948430Z" level=info msg="shim disconnected" id=109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d namespace=k8s.io
Feb 13 19:04:16.970020 containerd[1483]: time="2025-02-13T19:04:16.970015233Z" level=warning msg="cleaning up after shim disconnected" id=109af7d1dc5b632d03ea8ae61c87525cec936632b28319810b3c4b8d539dcc7d namespace=k8s.io
Feb 13 19:04:16.970020 containerd[1483]: time="2025-02-13T19:04:16.970024793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:16.980887 kubelet[1793]: E0213 19:04:16.980839 1793 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:04:17.105549 kubelet[1793]: E0213 19:04:17.105519 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:17.107589 containerd[1483]: time="2025-02-13T19:04:17.107545782Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:04:17.117459 containerd[1483]: time="2025-02-13T19:04:17.117392388Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685\""
Feb 13 19:04:17.118229 containerd[1483]: time="2025-02-13T19:04:17.118194338Z" level=info msg="StartContainer for \"72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685\""
Feb 13 19:04:17.143165 systemd[1]: Started cri-containerd-72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685.scope - libcontainer container 72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685.
Feb 13 19:04:17.165886 containerd[1483]: time="2025-02-13T19:04:17.165834586Z" level=info msg="StartContainer for \"72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685\" returns successfully"
Feb 13 19:04:17.204175 systemd[1]: cri-containerd-72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685.scope: Deactivated successfully.
Feb 13 19:04:17.228385 containerd[1483]: time="2025-02-13T19:04:17.228298904Z" level=info msg="shim disconnected" id=72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685 namespace=k8s.io
Feb 13 19:04:17.228385 containerd[1483]: time="2025-02-13T19:04:17.228381627Z" level=warning msg="cleaning up after shim disconnected" id=72a6c6e656a0be1893df118b6d391545513a272a04c1e8e5aaf65a6393b0b685 namespace=k8s.io
Feb 13 19:04:17.228385 containerd[1483]: time="2025-02-13T19:04:17.228391627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:17.664163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332535540.mount: Deactivated successfully.
Feb 13 19:04:17.882965 kubelet[1793]: E0213 19:04:17.882918 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:17.985994 containerd[1483]: time="2025-02-13T19:04:17.985869298Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:17.987216 containerd[1483]: time="2025-02-13T19:04:17.987165426Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 19:04:17.988457 containerd[1483]: time="2025-02-13T19:04:17.988395071Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:17.989943 containerd[1483]: time="2025-02-13T19:04:17.989846725Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.129338319s"
Feb 13 19:04:17.989943 containerd[1483]: time="2025-02-13T19:04:17.989889727Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 19:04:17.992421 containerd[1483]: time="2025-02-13T19:04:17.992375179Z" level=info msg="CreateContainer within sandbox \"9c55e05a784fe5079e775264b13f4e76ab38651c6139b5938fe5027797bd1f9c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:04:18.006922 containerd[1483]: time="2025-02-13T19:04:18.005384376Z" level=info msg="CreateContainer within sandbox \"9c55e05a784fe5079e775264b13f4e76ab38651c6139b5938fe5027797bd1f9c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8a43a1bd516870044691b1b272f6220cf7a16a5dff3db878f326786d183dc15b\""
Feb 13 19:04:18.009849 containerd[1483]: time="2025-02-13T19:04:18.009789813Z" level=info msg="StartContainer for \"8a43a1bd516870044691b1b272f6220cf7a16a5dff3db878f326786d183dc15b\""
Feb 13 19:04:18.034172 systemd[1]: Started cri-containerd-8a43a1bd516870044691b1b272f6220cf7a16a5dff3db878f326786d183dc15b.scope - libcontainer container 8a43a1bd516870044691b1b272f6220cf7a16a5dff3db878f326786d183dc15b.
Feb 13 19:04:18.064819 containerd[1483]: time="2025-02-13T19:04:18.064773897Z" level=info msg="StartContainer for \"8a43a1bd516870044691b1b272f6220cf7a16a5dff3db878f326786d183dc15b\" returns successfully"
Feb 13 19:04:18.110274 kubelet[1793]: E0213 19:04:18.110010 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:18.114974 kubelet[1793]: E0213 19:04:18.114944 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:18.117947 containerd[1483]: time="2025-02-13T19:04:18.117429338Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:04:18.127962 kubelet[1793]: I0213 19:04:18.121506 1793 setters.go:602] "Node became not ready" node="10.0.0.49" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:04:18Z","lastTransitionTime":"2025-02-13T19:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:04:18.127962 kubelet[1793]: I0213 19:04:18.126018 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nqx8t" podStartSLOduration=0.995579925 podStartE2EDuration="2.126001564s" podCreationTimestamp="2025-02-13 19:04:16 +0000 UTC" firstStartedPulling="2025-02-13 19:04:16.860161313 +0000 UTC m=+50.509744502" lastFinishedPulling="2025-02-13 19:04:17.990582952 +0000 UTC m=+51.640166141" observedRunningTime="2025-02-13 19:04:18.124564753 +0000 UTC m=+51.774147942" watchObservedRunningTime="2025-02-13 19:04:18.126001564 +0000 UTC m=+51.775584753"
Feb 13 19:04:18.216410 containerd[1483]: time="2025-02-13T19:04:18.216358952Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5\""
Feb 13 19:04:18.217268 containerd[1483]: time="2025-02-13T19:04:18.217202422Z" level=info msg="StartContainer for \"ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5\""
Feb 13 19:04:18.242161 systemd[1]: Started cri-containerd-ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5.scope - libcontainer container ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5.
Feb 13 19:04:18.269620 containerd[1483]: time="2025-02-13T19:04:18.269562012Z" level=info msg="StartContainer for \"ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5\" returns successfully"
Feb 13 19:04:18.271035 systemd[1]: cri-containerd-ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5.scope: Deactivated successfully.
Feb 13 19:04:18.294380 containerd[1483]: time="2025-02-13T19:04:18.294303856Z" level=info msg="shim disconnected" id=ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5 namespace=k8s.io
Feb 13 19:04:18.294380 containerd[1483]: time="2025-02-13T19:04:18.294365178Z" level=warning msg="cleaning up after shim disconnected" id=ed930d06d6f80c227cba2d8b7b1af06e0a4a2722ee7aa0ad467b31c78aa465a5 namespace=k8s.io
Feb 13 19:04:18.294380 containerd[1483]: time="2025-02-13T19:04:18.294373099Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:18.883581 kubelet[1793]: E0213 19:04:18.883532 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:19.118388 kubelet[1793]: E0213 19:04:19.118340 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:19.118534 kubelet[1793]: E0213 19:04:19.118455 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:19.120445 containerd[1483]: time="2025-02-13T19:04:19.120389173Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:04:19.134156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079074793.mount: Deactivated successfully.
Feb 13 19:04:19.135663 containerd[1483]: time="2025-02-13T19:04:19.135611937Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417\""
Feb 13 19:04:19.136227 containerd[1483]: time="2025-02-13T19:04:19.136203277Z" level=info msg="StartContainer for \"f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417\""
Feb 13 19:04:19.162127 systemd[1]: Started cri-containerd-f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417.scope - libcontainer container f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417.
Feb 13 19:04:19.185731 systemd[1]: cri-containerd-f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417.scope: Deactivated successfully.
Feb 13 19:04:19.187747 containerd[1483]: time="2025-02-13T19:04:19.187700169Z" level=info msg="StartContainer for \"f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417\" returns successfully"
Feb 13 19:04:19.218568 containerd[1483]: time="2025-02-13T19:04:19.218331504Z" level=info msg="shim disconnected" id=f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417 namespace=k8s.io
Feb 13 19:04:19.218568 containerd[1483]: time="2025-02-13T19:04:19.218390546Z" level=warning msg="cleaning up after shim disconnected" id=f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417 namespace=k8s.io
Feb 13 19:04:19.218568 containerd[1483]: time="2025-02-13T19:04:19.218399346Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:19.659128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f011159e7c28b059ef270a4aa8f0a3ef0b83a15e07aa0af7d240accd95127417-rootfs.mount: Deactivated successfully.
Feb 13 19:04:19.883928 kubelet[1793]: E0213 19:04:19.883875 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:20.125173 kubelet[1793]: E0213 19:04:20.125135 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:20.127320 containerd[1483]: time="2025-02-13T19:04:20.127157394Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:04:20.187880 containerd[1483]: time="2025-02-13T19:04:20.187822408Z" level=info msg="CreateContainer within sandbox \"43ddff480abc16288a6157a2dbda93b00e7eba6db7e8b3c9a550ba7044e819ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35781742e9d2861a9bb96669fa3fefed95d9bafcac8b41fb3d3bb82362f72d05\""
Feb 13 19:04:20.188737 containerd[1483]: time="2025-02-13T19:04:20.188397187Z" level=info msg="StartContainer for \"35781742e9d2861a9bb96669fa3fefed95d9bafcac8b41fb3d3bb82362f72d05\""
Feb 13 19:04:20.235097 systemd[1]: Started cri-containerd-35781742e9d2861a9bb96669fa3fefed95d9bafcac8b41fb3d3bb82362f72d05.scope - libcontainer container 35781742e9d2861a9bb96669fa3fefed95d9bafcac8b41fb3d3bb82362f72d05.
Feb 13 19:04:20.258820 containerd[1483]: time="2025-02-13T19:04:20.258770164Z" level=info msg="StartContainer for \"35781742e9d2861a9bb96669fa3fefed95d9bafcac8b41fb3d3bb82362f72d05\" returns successfully"
Feb 13 19:04:20.527939 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:04:20.884311 kubelet[1793]: E0213 19:04:20.884254 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:21.130385 kubelet[1793]: E0213 19:04:21.130069 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:21.145437 kubelet[1793]: I0213 19:04:21.145289 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jx2cw" podStartSLOduration=5.145273149 podStartE2EDuration="5.145273149s" podCreationTimestamp="2025-02-13 19:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:21.145022221 +0000 UTC m=+54.794605410" watchObservedRunningTime="2025-02-13 19:04:21.145273149 +0000 UTC m=+54.794856338"
Feb 13 19:04:21.885199 kubelet[1793]: E0213 19:04:21.885151 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:22.760195 kubelet[1793]: E0213 19:04:22.760116 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:22.886262 kubelet[1793]: E0213 19:04:22.886220 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:23.426661 systemd-networkd[1400]: lxc_health: Link UP
Feb 13 19:04:23.432092 systemd-networkd[1400]: lxc_health: Gained carrier
Feb 13 19:04:23.886961 kubelet[1793]: E0213 19:04:23.886879 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:24.760767 kubelet[1793]: E0213 19:04:24.760072 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:24.887975 kubelet[1793]: E0213 19:04:24.887924 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:25.137602 kubelet[1793]: E0213 19:04:25.137232 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:25.489069 systemd-networkd[1400]: lxc_health: Gained IPv6LL
Feb 13 19:04:25.888474 kubelet[1793]: E0213 19:04:25.888420 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:26.141562 kubelet[1793]: E0213 19:04:26.138428 1793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:26.843205 kubelet[1793]: E0213 19:04:26.843158 1793 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:26.860322 containerd[1483]: time="2025-02-13T19:04:26.860274259Z" level=info msg="StopPodSandbox for \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\""
Feb 13 19:04:26.860656 containerd[1483]: time="2025-02-13T19:04:26.860364101Z" level=info msg="TearDown network for sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" successfully"
Feb 13 19:04:26.860656 containerd[1483]: time="2025-02-13T19:04:26.860375061Z" level=info msg="StopPodSandbox for \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" returns successfully"
Feb 13 19:04:26.860801 containerd[1483]: time="2025-02-13T19:04:26.860765072Z" level=info msg="RemovePodSandbox for \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\""
Feb 13 19:04:26.860801 containerd[1483]: time="2025-02-13T19:04:26.860791833Z" level=info msg="Forcibly stopping sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\""
Feb 13 19:04:26.860857 containerd[1483]: time="2025-02-13T19:04:26.860850274Z" level=info msg="TearDown network for sandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" successfully"
Feb 13 19:04:26.869585 containerd[1483]: time="2025-02-13T19:04:26.869543872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:04:26.869672 containerd[1483]: time="2025-02-13T19:04:26.869595633Z" level=info msg="RemovePodSandbox \"03fe3a4a1ac7ed7755ea17b5255b860f0a357fbb1bee2b5bf5c57e741d5cb29d\" returns successfully"
Feb 13 19:04:26.888625 kubelet[1793]: E0213 19:04:26.888579 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:27.889688 kubelet[1793]: E0213 19:04:27.889636 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:28.889977 kubelet[1793]: E0213 19:04:28.889930 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:29.890175 kubelet[1793]: E0213 19:04:29.890096 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:30.890931 kubelet[1793]: E0213 19:04:30.890864 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"