Feb 13 19:18:05.922205 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:18:05.922225 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025 Feb 13 19:18:05.922236 kernel: KASLR enabled Feb 13 19:18:05.922241 kernel: efi: EFI v2.7 by EDK II Feb 13 19:18:05.922247 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 Feb 13 19:18:05.922253 kernel: random: crng init done Feb 13 19:18:05.922260 kernel: secureboot: Secure boot disabled Feb 13 19:18:05.922266 kernel: ACPI: Early table checksum verification disabled Feb 13 19:18:05.922272 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 19:18:05.922280 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:18:05.922286 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922292 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922298 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922304 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922311 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922319 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922325 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922332 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922338 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:18:05.922344 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 19:18:05.922350 kernel: NUMA: Failed to initialise from firmware Feb 13 19:18:05.922356 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:18:05.922362 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff] Feb 13 19:18:05.922368 kernel: Zone ranges: Feb 13 19:18:05.922374 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:18:05.922381 kernel: DMA32 empty Feb 13 19:18:05.922387 kernel: Normal empty Feb 13 19:18:05.922393 kernel: Movable zone start for each node Feb 13 19:18:05.922400 kernel: Early memory node ranges Feb 13 19:18:05.922406 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 19:18:05.922423 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:18:05.922429 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:18:05.922435 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:18:05.922442 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:18:05.922448 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:18:05.922454 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:18:05.922460 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:18:05.922468 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:18:05.922474 kernel: psci: probing for conduit method from ACPI. Feb 13 19:18:05.922480 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 19:18:05.922489 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:18:05.922496 kernel: psci: Trusted OS migration not required Feb 13 19:18:05.922502 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:18:05.922510 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:18:05.922517 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:18:05.922523 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:18:05.922530 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:18:05.922537 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:18:05.922543 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:18:05.922550 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:18:05.922556 kernel: CPU features: detected: Spectre-v4 Feb 13 19:18:05.922563 kernel: CPU features: detected: Spectre-BHB Feb 13 19:18:05.922569 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:18:05.922577 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:18:05.922584 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:18:05.922590 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:18:05.922597 kernel: alternatives: applying boot alternatives Feb 13 19:18:05.922605 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a Feb 13 19:18:05.922612 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:18:05.922618 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:18:05.922625 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:18:05.922631 kernel: Fallback order for Node 0: 0 Feb 13 19:18:05.922638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:18:05.922644 kernel: Policy zone: DMA Feb 13 19:18:05.922653 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:18:05.922659 kernel: software IO TLB: area num 4. Feb 13 19:18:05.922666 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:18:05.922672 kernel: Memory: 2386328K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185960K reserved, 0K cma-reserved) Feb 13 19:18:05.922679 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:18:05.922686 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:18:05.922693 kernel: rcu: RCU event tracing is enabled. Feb 13 19:18:05.922700 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:18:05.922706 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:18:05.922713 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:18:05.922719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:18:05.922726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:18:05.922734 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:18:05.922828 kernel: GICv3: 256 SPIs implemented Feb 13 19:18:05.922840 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:18:05.922847 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:18:05.922854 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:18:05.922861 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:18:05.922867 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:18:05.922874 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:18:05.922881 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:18:05.922887 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:18:05.922894 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:18:05.922904 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:18:05.922910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:18:05.922917 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:18:05.922924 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:18:05.922930 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:18:05.922937 kernel: arm-pv: using stolen time PV Feb 13 19:18:05.922944 kernel: Console: colour dummy device 80x25 Feb 13 19:18:05.922951 kernel: ACPI: Core revision 20230628 Feb 13 19:18:05.922958 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 19:18:05.922964 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:18:05.922973 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:18:05.922980 kernel: landlock: Up and running. Feb 13 19:18:05.922986 kernel: SELinux: Initializing. Feb 13 19:18:05.922993 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:18:05.923000 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:18:05.923007 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:18:05.923014 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:18:05.923021 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:18:05.923028 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:18:05.923037 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:18:05.923043 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:18:05.923050 kernel: Remapping and enabling EFI services. Feb 13 19:18:05.923057 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:18:05.923065 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:18:05.923072 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:18:05.923079 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:18:05.923086 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:18:05.923092 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:18:05.923099 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:18:05.923108 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:18:05.923115 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:18:05.923130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:18:05.923140 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:18:05.923150 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:18:05.923157 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:18:05.923166 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:18:05.923175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:18:05.923187 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:18:05.923196 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:18:05.923203 kernel: SMP: Total of 4 processors activated. Feb 13 19:18:05.923211 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:18:05.923218 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:18:05.923226 kernel: CPU features: detected: Common not Private translations Feb 13 19:18:05.923234 kernel: CPU features: detected: CRC32 instructions Feb 13 19:18:05.923241 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:18:05.923248 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:18:05.923257 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:18:05.923264 kernel: CPU features: detected: Privileged Access Never Feb 13 19:18:05.923271 kernel: CPU features: detected: RAS Extension Support Feb 13 19:18:05.923278 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:18:05.923286 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:18:05.923293 kernel: alternatives: applying system-wide alternatives Feb 13 19:18:05.923300 kernel: devtmpfs: initialized Feb 13 19:18:05.923308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:18:05.923315 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:18:05.923324 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:18:05.923331 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:18:05.923338 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 19:18:05.923345 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:18:05.923353 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:18:05.923360 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:18:05.923367 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:18:05.923375 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:18:05.923382 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Feb 13 19:18:05.923391 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:18:05.923398 kernel: cpuidle: using governor menu Feb 13 19:18:05.923405 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:18:05.923420 kernel: ASID allocator initialised with 32768 entries Feb 13 19:18:05.923428 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:18:05.923435 kernel: Serial: AMBA PL011 UART driver Feb 13 19:18:05.923442 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:18:05.923449 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:18:05.923457 kernel: Modules: 508960 pages in range for PLT usage Feb 13 19:18:05.923466 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:18:05.923473 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:18:05.923481 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:18:05.923488 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:18:05.923495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:18:05.923502 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:18:05.923510 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:18:05.923517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:18:05.923525 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:18:05.923533 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:18:05.923541 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:18:05.923548 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:18:05.923555 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:18:05.923562 kernel: ACPI: Interpreter enabled Feb 13 19:18:05.923569 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:18:05.923577 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:18:05.923584 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:18:05.923591 kernel: printk: console [ttyAMA0] enabled Feb 13 19:18:05.923600 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:18:05.923831 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:18:05.923932 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:18:05.924002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:18:05.924067 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:18:05.924131 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:18:05.924141 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 19:18:05.924152 
kernel: PCI host bridge to bus 0000:00 Feb 13 19:18:05.924228 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:18:05.924290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:18:05.924368 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:18:05.924443 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:18:05.924533 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:18:05.924612 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:18:05.924685 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:18:05.924796 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:18:05.924867 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:18:05.924933 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:18:05.924998 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:18:05.925063 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:18:05.925123 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:18:05.925186 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:18:05.925244 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:18:05.925253 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:18:05.925262 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:18:05.925269 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:18:05.925276 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:18:05.925284 kernel: iommu: Default domain type: Translated Feb 13 19:18:05.925291 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:18:05.925300 kernel: efivars: Registered efivars operations Feb 13 19:18:05.925308 kernel: vgaarb: loaded Feb 13 19:18:05.925315 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:18:05.925322 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:18:05.925330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:18:05.925337 kernel: pnp: PnP ACPI init Feb 13 19:18:05.925415 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:18:05.925426 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:18:05.925436 kernel: NET: Registered PF_INET protocol family Feb 13 19:18:05.925444 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:18:05.925451 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:18:05.925461 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:18:05.925472 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:18:05.925479 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:18:05.925486 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:18:05.925494 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:18:05.925501 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:18:05.925510 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:18:05.925517 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:18:05.925525 kernel: kvm [1]: HYP mode not available 
Feb 13 19:18:05.925532 kernel: Initialise system trusted keyrings Feb 13 19:18:05.925540 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:18:05.925547 kernel: Key type asymmetric registered Feb 13 19:18:05.925554 kernel: Asymmetric key parser 'x509' registered Feb 13 19:18:05.925562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:18:05.925569 kernel: io scheduler mq-deadline registered Feb 13 19:18:05.925578 kernel: io scheduler kyber registered Feb 13 19:18:05.925585 kernel: io scheduler bfq registered Feb 13 19:18:05.925592 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:18:05.925600 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:18:05.925607 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:18:05.925680 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 19:18:05.925690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:18:05.925697 kernel: thunder_xcv, ver 1.0 Feb 13 19:18:05.925704 kernel: thunder_bgx, ver 1.0 Feb 13 19:18:05.925713 kernel: nicpf, ver 1.0 Feb 13 19:18:05.925721 kernel: nicvf, ver 1.0 Feb 13 19:18:05.925824 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:18:05.925892 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:18:05 UTC (1739474285) Feb 13 19:18:05.925902 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:18:05.925910 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:18:05.925917 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:18:05.925925 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:18:05.925935 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:18:05.925942 kernel: Segment Routing with IPv6 Feb 13 19:18:05.925949 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:18:05.925957 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:18:05.925964 kernel: Key type dns_resolver registered Feb 13 19:18:05.925971 kernel: registered taskstats version 1 Feb 13 19:18:05.925978 kernel: Loading compiled-in X.509 certificates Feb 13 19:18:05.925986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936' Feb 13 19:18:05.925993 kernel: Key type .fscrypt registered Feb 13 19:18:05.926001 kernel: Key type fscrypt-provisioning registered Feb 13 19:18:05.926009 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:18:05.926016 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:18:05.926023 kernel: ima: No architecture policies found Feb 13 19:18:05.926031 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:18:05.926038 kernel: clk: Disabling unused clocks Feb 13 19:18:05.926046 kernel: Freeing unused kernel memory: 39680K Feb 13 19:18:05.926053 kernel: Run /init as init process Feb 13 19:18:05.926060 kernel: with arguments: Feb 13 19:18:05.926069 kernel: /init Feb 13 19:18:05.926076 kernel: with environment: Feb 13 19:18:05.926083 kernel: HOME=/ Feb 13 19:18:05.926090 kernel: TERM=linux Feb 13 19:18:05.926097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:18:05.926107 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:18:05.926117 systemd[1]: Detected virtualization kvm. Feb 13 19:18:05.926125 systemd[1]: Detected architecture arm64. Feb 13 19:18:05.926134 systemd[1]: Running in initrd. Feb 13 19:18:05.926141 systemd[1]: No hostname configured, using default hostname. Feb 13 19:18:05.926149 systemd[1]: Hostname set to . Feb 13 19:18:05.926157 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:18:05.926165 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:18:05.926173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:05.926181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:18:05.926190 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:18:05.926199 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:18:05.926207 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:18:05.926215 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:18:05.926225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:18:05.926233 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:18:05.926241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:05.926249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:05.926258 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:18:05.926266 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:18:05.926274 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:18:05.926282 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:18:05.926290 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:18:05.926298 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:18:05.926306 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:18:05.926314 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:18:05.926323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:18:05.926332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:05.926340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:05.926348 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:18:05.926356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:18:05.926364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:18:05.926371 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:18:05.926379 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:18:05.926387 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:18:05.926397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:18:05.926405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:05.926422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:18:05.926430 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:05.926438 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:18:05.926446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:18:05.926457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:05.926485 systemd-journald[238]: Collecting audit messages is disabled. Feb 13 19:18:05.926507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:05.926516 systemd-journald[238]: Journal started Feb 13 19:18:05.926536 systemd-journald[238]: Runtime Journal (/run/log/journal/966ddb68608645eebdd5ca6f17207a36) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:18:05.916226 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 19:18:05.930774 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:18:05.931204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:05.935764 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:18:05.936944 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:18:05.939892 kernel: Bridge firewalling registered Feb 13 19:18:05.939561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:18:05.939689 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 19:18:05.941197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:05.946681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:05.949211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:05.955195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:05.959922 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:18:05.961162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:05.963207 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:05.967065 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:18:05.974726 dracut-cmdline[271]: dracut-dracut-053 Feb 13 19:18:05.977197 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a Feb 13 19:18:05.996856 systemd-resolved[279]: Positive Trust Anchors: Feb 13 19:18:05.998791 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:18:05.998827 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:18:06.006277 systemd-resolved[279]: Defaulting to hostname 'linux'. Feb 13 19:18:06.007454 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:18:06.008607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:06.054769 kernel: SCSI subsystem initialized Feb 13 19:18:06.059762 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:18:06.067771 kernel: iscsi: registered transport (tcp) Feb 13 19:18:06.080886 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:18:06.080908 kernel: QLogic iSCSI HBA Driver Feb 13 19:18:06.123648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:18:06.135916 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:18:06.152772 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:18:06.152831 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:18:06.152854 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:18:06.202778 kernel: raid6: neonx8 gen() 15763 MB/s Feb 13 19:18:06.219762 kernel: raid6: neonx4 gen() 15619 MB/s Feb 13 19:18:06.236768 kernel: raid6: neonx2 gen() 13217 MB/s Feb 13 19:18:06.253764 kernel: raid6: neonx1 gen() 10454 MB/s Feb 13 19:18:06.270766 kernel: raid6: int64x8 gen() 6950 MB/s Feb 13 19:18:06.287763 kernel: raid6: int64x4 gen() 7315 MB/s Feb 13 19:18:06.304765 kernel: raid6: int64x2 gen() 6121 MB/s Feb 13 19:18:06.321863 kernel: raid6: int64x1 gen() 5049 MB/s Feb 13 19:18:06.321881 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s Feb 13 19:18:06.339936 kernel: raid6: .... xor() 11931 MB/s, rmw enabled Feb 13 19:18:06.339952 kernel: raid6: using neon recovery algorithm Feb 13 19:18:06.346103 kernel: xor: measuring software checksum speed Feb 13 19:18:06.346134 kernel: 8regs : 18838 MB/sec Feb 13 19:18:06.346840 kernel: 32regs : 19641 MB/sec Feb 13 19:18:06.348097 kernel: arm64_neon : 26831 MB/sec Feb 13 19:18:06.348109 kernel: xor: using function: arm64_neon (26831 MB/sec) Feb 13 19:18:06.400771 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:18:06.413821 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:18:06.425921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:06.439177 systemd-udevd[461]: Using default interface naming scheme 'v255'. Feb 13 19:18:06.442368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:06.460962 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:18:06.474365 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Feb 13 19:18:06.504078 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:18:06.511960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:18:06.554943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:06.566044 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:18:06.576976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:18:06.579270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:18:06.581045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:06.583346 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:18:06.591056 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:18:06.600754 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:18:06.614905 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:18:06.623996 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:18:06.624111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:18:06.624123 kernel: GPT:9289727 != 19775487 Feb 13 19:18:06.624133 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:18:06.624143 kernel: GPT:9289727 != 19775487 Feb 13 19:18:06.624160 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:18:06.624172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:06.615767 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:18:06.615837 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:06.625834 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:06.627103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:18:06.627179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:06.630501 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:06.639923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:06.647459 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) Feb 13 19:18:06.647511 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (510) Feb 13 19:18:06.650414 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:18:06.655066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:18:06.657344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:06.667863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Feb 13 19:18:06.671818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:18:06.673067 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:18:06.685898 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:18:06.687825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:06.692586 disk-uuid[549]: Primary Header is updated. Feb 13 19:18:06.692586 disk-uuid[549]: Secondary Entries is updated. Feb 13 19:18:06.692586 disk-uuid[549]: Secondary Header is updated. Feb 13 19:18:06.697765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:06.710813 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:07.709584 disk-uuid[550]: The operation has completed successfully. Feb 13 19:18:07.710670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:07.736671 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:18:07.736830 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:18:07.766965 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:18:07.769812 sh[571]: Success Feb 13 19:18:07.790772 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:18:07.828202 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:18:07.836513 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:18:07.838029 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:18:07.852428 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44 Feb 13 19:18:07.852482 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:18:07.852503 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:18:07.853269 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:18:07.853931 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:18:07.857540 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:18:07.858887 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:18:07.868930 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:18:07.871577 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:18:07.887434 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:18:07.887498 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:18:07.887518 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:18:07.890763 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:18:07.900171 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:18:07.901600 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:18:07.910163 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:18:07.920197 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:18:07.999370 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:18:08.017992 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:18:08.040572 systemd-networkd[756]: lo: Link UP Feb 13 19:18:08.040582 systemd-networkd[756]: lo: Gained carrier Feb 13 19:18:08.041375 systemd-networkd[756]: Enumeration completed Feb 13 19:18:08.041693 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:18:08.042922 systemd[1]: Reached target network.target - Network. Feb 13 19:18:08.044141 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:08.044144 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:18:08.044862 systemd-networkd[756]: eth0: Link UP Feb 13 19:18:08.044865 systemd-networkd[756]: eth0: Gained carrier Feb 13 19:18:08.044872 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:08.054514 ignition[666]: Ignition 2.20.0 Feb 13 19:18:08.054521 ignition[666]: Stage: fetch-offline Feb 13 19:18:08.054556 ignition[666]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:08.054563 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:08.054710 ignition[666]: parsed url from cmdline: "" Feb 13 19:18:08.054716 ignition[666]: no config URL provided Feb 13 19:18:08.054721 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:18:08.054727 ignition[666]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:18:08.054774 ignition[666]: op(1): [started] loading QEMU firmware config module Feb 13 19:18:08.054778 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:18:08.063409 ignition[666]: op(1): [finished] loading QEMU firmware config module Feb 13 19:18:08.067816 systemd-networkd[756]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:18:08.070657 ignition[666]: parsing config with SHA512: d151f104d24b9ebafeb04acb0d7fb628fc72928a115d9c48a34ab4c3a41882a418fb874f9516e2a2543f5c01fbd8f013685fa06484d2e6a6dff1f6cda27e2cb1 Feb 13 19:18:08.074386 unknown[666]: fetched base config from "system" Feb 13 19:18:08.074408 unknown[666]: fetched user config from "qemu" Feb 13 19:18:08.074666 ignition[666]: fetch-offline: fetch-offline passed Feb 13 19:18:08.075847 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:18:08.074737 ignition[666]: Ignition finished successfully Feb 13 19:18:08.078341 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:18:08.088972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:18:08.100024 ignition[769]: Ignition 2.20.0 Feb 13 19:18:08.100036 ignition[769]: Stage: kargs Feb 13 19:18:08.100200 ignition[769]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:08.100210 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:08.100936 ignition[769]: kargs: kargs passed Feb 13 19:18:08.100984 ignition[769]: Ignition finished successfully Feb 13 19:18:08.104812 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 19:18:08.114914 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:18:08.124406 ignition[777]: Ignition 2.20.0 Feb 13 19:18:08.124417 ignition[777]: Stage: disks Feb 13 19:18:08.124577 ignition[777]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:08.124587 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:08.127279 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:18:08.125294 ignition[777]: disks: disks passed Feb 13 19:18:08.128829 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:18:08.125334 ignition[777]: Ignition finished successfully Feb 13 19:18:08.130576 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:18:08.132163 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:18:08.133978 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:18:08.135530 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:18:08.145946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:18:08.155739 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:18:08.238579 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:18:08.246872 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:18:08.312620 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:18:08.314137 kernel: EXT4-fs (vda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none. Feb 13 19:18:08.313889 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:18:08.334865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:18:08.336781 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:18:08.338026 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:18:08.338098 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:18:08.338152 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:18:08.350023 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (795) Feb 13 19:18:08.350144 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:18:08.350155 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:18:08.344012 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:18:08.353367 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:18:08.349171 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:18:08.355845 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:18:08.357714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:18:08.393025 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:18:08.396455 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:18:08.401202 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:18:08.405833 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:18:08.502783 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:18:08.510862 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:18:08.512556 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:18:08.519759 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:18:08.538928 ignition[910]: INFO : Ignition 2.20.0 Feb 13 19:18:08.539912 ignition[910]: INFO : Stage: mount Feb 13 19:18:08.539912 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:08.539912 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:08.543422 ignition[910]: INFO : mount: mount passed Feb 13 19:18:08.543422 ignition[910]: INFO : Ignition finished successfully Feb 13 19:18:08.542948 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:18:08.559978 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:18:08.561132 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:18:08.850980 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:18:08.862210 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:18:08.867942 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924) Feb 13 19:18:08.867980 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:18:08.870341 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:18:08.870371 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:18:08.872756 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:18:08.873868 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:18:08.902472 ignition[941]: INFO : Ignition 2.20.0 Feb 13 19:18:08.902472 ignition[941]: INFO : Stage: files Feb 13 19:18:08.904237 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:08.904237 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:08.904237 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:18:08.907616 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:18:08.907616 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:18:08.910771 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:18:08.912158 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:18:08.912158 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:18:08.911414 unknown[941]: wrote ssh authorized keys file for user: core Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:18:08.915902 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 19:18:09.177035 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:18:09.493879 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:18:09.493879 ignition[941]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Feb 13 19:18:09.498014 ignition[941]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:18:09.498014 ignition[941]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:18:09.498014 ignition[941]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Feb 13 19:18:09.498014 ignition[941]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:18:09.523386 ignition[941]: INFO : 
files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:18:09.527491 ignition[941]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:18:09.529877 ignition[941]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:18:09.529877 ignition[941]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:18:09.529877 ignition[941]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:18:09.529877 ignition[941]: INFO : files: files passed Feb 13 19:18:09.529877 ignition[941]: INFO : Ignition finished successfully Feb 13 19:18:09.530271 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:18:09.550953 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:18:09.552846 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:18:09.555463 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:18:09.555548 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:18:09.560991 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:18:09.564520 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:09.564520 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:09.567785 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:09.569483 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:18:09.571108 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:18:09.579899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:18:09.599443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:18:09.599568 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:18:09.601698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:18:09.603563 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:18:09.605338 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:18:09.606062 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:18:09.620832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:18:09.631982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:18:09.642511 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:09.643760 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:09.645965 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:18:09.647704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:18:09.647845 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:18:09.650288 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Feb 13 19:18:09.652296 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:18:09.653894 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:18:09.655502 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:18:09.657337 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:18:09.659241 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:18:09.661016 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:18:09.662929 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:18:09.664767 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:18:09.666436 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:18:09.667911 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:18:09.668035 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:18:09.670285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:09.672212 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:09.674104 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:18:09.674813 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:09.676163 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:18:09.676269 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:18:09.678919 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:18:09.679028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:18:09.680996 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:18:09.682532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:18:09.686802 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:18:09.688109 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:18:09.690131 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:18:09.691676 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:18:09.691776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:18:09.693284 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:18:09.693365 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:18:09.694952 systemd-networkd[756]: eth0: Gained IPv6LL Feb 13 19:18:09.695854 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:18:09.695982 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:18:09.697729 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:18:09.697844 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:18:09.706946 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:18:09.708535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:18:09.709417 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:18:09.709547 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:09.711499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 19:18:09.711601 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:18:09.716660 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:18:09.717787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:18:09.720388 ignition[995]: INFO : Ignition 2.20.0 Feb 13 19:18:09.720388 ignition[995]: INFO : Stage: umount Feb 13 19:18:09.721982 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:09.721982 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:09.721982 ignition[995]: INFO : umount: umount passed Feb 13 19:18:09.721982 ignition[995]: INFO : Ignition finished successfully Feb 13 19:18:09.722679 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:18:09.722803 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:18:09.724525 systemd[1]: Stopped target network.target - Network. Feb 13 19:18:09.725369 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:18:09.725441 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:18:09.729001 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:18:09.729051 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:18:09.730851 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:18:09.730896 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:18:09.732417 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:18:09.732462 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:18:09.734278 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:18:09.737772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:18:09.742457 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:18:09.743231 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:18:09.744780 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:18:09.747107 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:18:09.748757 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:18:09.750255 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:18:09.750347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:18:09.751436 systemd-networkd[756]: eth0: DHCPv6 lease lost Feb 13 19:18:09.752358 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:18:09.752418 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:09.754722 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:18:09.754837 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:18:09.757145 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:18:09.757192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:09.768897 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:18:09.770243 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:18:09.770314 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:18:09.772357 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
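The umount-stage output above also records where Ignition looked for built-in base configs (/usr/lib/ignition/base.d and the per-platform base.platform.d/qemu directory), both empty here. To review all Ignition activity for a boot after the fact, queries along these lines work, using the identifier and unit names visible in this log:

    # Everything logged under the "ignition" syslog identifier this boot
    journalctl -b -t ignition
    # Or per stage, via the units seen above
    journalctl -b -u ignition-files.service -u ignition-mount.service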
Feb 13 19:18:09.772415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:09.774162 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:18:09.774212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:09.776194 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:09.786679 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:18:09.786808 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:18:09.800874 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:18:09.801084 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:09.803273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:18:09.803313 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:09.805065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:18:09.805103 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:09.806851 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:18:09.806900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:18:09.809581 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:18:09.809627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:18:09.812234 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:18:09.812279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:09.825894 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:18:09.826874 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:18:09.826932 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:09.828962 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:18:09.829005 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:09.830893 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:18:09.830936 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:09.833054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:18:09.833101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:09.835325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:18:09.835439 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:18:09.837755 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:18:09.839918 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:18:09.849448 systemd[1]: Switching root. Feb 13 19:18:09.882897 systemd-journald[238]: Journal stopped Feb 13 19:18:10.659403 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
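"Switching root" is the hand-off from the initramfs to the real root filesystem mounted at /sysroot; journald receiving SIGTERM from PID 1 is part of that transition. As a simplification (not a literal replay of initrd-switch-root.service), the step amounts to:

    # Approximate equivalent of what initrd-switch-root.service performs:
    # make /sysroot the new / and re-execute systemd there as PID 1
    systemctl --no-block switch-root /sysroot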
Feb 13 19:18:10.659454 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:18:10.659469 kernel: SELinux: policy capability open_perms=1 Feb 13 19:18:10.659479 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:18:10.659488 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:18:10.659499 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:18:10.659508 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:18:10.659517 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:18:10.659526 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:18:10.659535 kernel: audit: type=1403 audit(1739474290.017:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:18:10.659546 systemd[1]: Successfully loaded SELinux policy in 33.601ms. Feb 13 19:18:10.659566 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.995ms. Feb 13 19:18:10.659581 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:18:10.659592 systemd[1]: Detected virtualization kvm. Feb 13 19:18:10.659602 systemd[1]: Detected architecture arm64. Feb 13 19:18:10.659611 systemd[1]: Detected first boot. Feb 13 19:18:10.659622 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:18:10.659633 zram_generator::config[1040]: No configuration found. Feb 13 19:18:10.659644 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:18:10.659654 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:18:10.659664 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:18:10.659674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:18:10.659684 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:18:10.659695 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:18:10.659706 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:18:10.659716 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:18:10.659731 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:18:10.659758 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:18:10.659772 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:18:10.659783 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:18:10.659793 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:10.659803 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:18:10.659813 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:18:10.659826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:18:10.659836 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
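Several of the facts reported above (KVM virtualization, the SELinux policy load, the first-boot machine ID derived from the VM UUID) can be confirmed from a shell once the system is up, assuming the usual utilities are installed:

    systemd-detect-virt      # should print "kvm" on this machine
    getenforce               # current SELinux mode, if the SELinux userspace tools are present
    cat /etc/machine-id      # the ID initialized from the VM UUID above
    systemd-analyze          # boot-time breakdown (kernel/initrd/userspace)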
Feb 13 19:18:10.659846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:18:10.659856 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:18:10.659866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:10.659876 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:18:10.659886 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:18:10.659896 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:18:10.659908 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:18:10.659919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:10.659929 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:18:10.659939 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:18:10.659949 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:18:10.659959 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:18:10.659970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:18:10.659981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:10.659991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:10.660003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:10.660013 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:18:10.660036 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:18:10.660046 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:18:10.660056 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:18:10.660065 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:18:10.660075 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:18:10.660086 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:18:10.660096 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:18:10.660108 systemd[1]: Reached target machines.target - Containers. Feb 13 19:18:10.660119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:18:10.660129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:18:10.660140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:18:10.660150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:18:10.660160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:18:10.660170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:18:10.660180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:18:10.660191 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:18:10.660202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
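The modprobe@<name>.service entries being started here are instances of systemd's stock template unit whose only job is to load the named kernel module; the same template can be invoked by hand:

    # Load the fuse module through the same template unit used above
    systemctl start modprobe@fuse.service
    lsmod | grep -E '^(fuse|loop|dm_mod)'   # the modules this log reports as loaded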
Feb 13 19:18:10.660213 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:18:10.660223 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:18:10.660233 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:18:10.660243 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:18:10.660253 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:18:10.660263 kernel: fuse: init (API version 7.39) Feb 13 19:18:10.660272 kernel: ACPI: bus type drm_connector registered Feb 13 19:18:10.660282 kernel: loop: module loaded Feb 13 19:18:10.660292 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:18:10.660302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:18:10.660312 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:18:10.660338 systemd-journald[1111]: Collecting audit messages is disabled. Feb 13 19:18:10.660358 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:18:10.660375 systemd-journald[1111]: Journal started Feb 13 19:18:10.660399 systemd-journald[1111]: Runtime Journal (/run/log/journal/966ddb68608645eebdd5ca6f17207a36) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:18:10.461395 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:18:10.478338 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:18:10.478680 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:18:10.662769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:18:10.665199 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:18:10.665221 systemd[1]: Stopped verity-setup.service. Feb 13 19:18:10.669236 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:18:10.669899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:18:10.671109 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:18:10.672281 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:18:10.673398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:18:10.674638 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:18:10.675852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:18:10.677065 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:18:10.678517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:10.680014 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:18:10.680154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:18:10.681641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:18:10.681831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:18:10.683130 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:18:10.683268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:18:10.686064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:18:10.686215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
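The mounts listed above (hugepages, POSIX message queues, debugfs, tracefs, /tmp) are the standard API filesystems systemd sets up early; they can be listed by filesystem type, for example:

    findmnt -t hugetlbfs,mqueue,debugfs,tracefs,tmpfs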
Feb 13 19:18:10.687603 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:18:10.687734 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:18:10.689104 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:18:10.689248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:18:10.690561 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:10.693062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:18:10.694501 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:18:10.706451 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:18:10.723891 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:18:10.726073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:18:10.727167 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:18:10.727206 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:18:10.729180 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:18:10.731403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:18:10.733585 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:18:10.734726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:18:10.736345 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:18:10.739950 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:18:10.741243 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:18:10.742940 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:18:10.744291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:18:10.747997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:10.748853 systemd-journald[1111]: Time spent on flushing to /var/log/journal/966ddb68608645eebdd5ca6f17207a36 is 22.059ms for 839 entries. Feb 13 19:18:10.748853 systemd-journald[1111]: System Journal (/var/log/journal/966ddb68608645eebdd5ca6f17207a36) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:18:10.897691 systemd-journald[1111]: Received client request to flush runtime journal. Feb 13 19:18:10.897858 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 19:18:10.897886 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:18:10.897908 kernel: loop1: detected capacity change from 0 to 116808 Feb 13 19:18:10.751464 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:18:10.754020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:18:10.758816 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
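The flush request above moves the runtime journal out of /run into persistent storage under /var/log/journal, and the loop0..loop5 capacity changes are presumably the squashfs images backing the system extensions merged just below. Both are easy to inspect:

    journalctl --disk-usage    # combined size of runtime and persistent journals
    journalctl --flush         # request the same /run -> /var flush performed above
    losetup -a                 # the loop devices and the image files behind them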
Feb 13 19:18:10.761074 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:18:10.762513 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:18:10.764032 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:18:10.766776 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:18:10.771998 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:18:10.788593 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:18:10.795835 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:18:10.798785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:10.800968 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Feb 13 19:18:10.800978 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Feb 13 19:18:10.806663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:10.819039 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:18:10.820246 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:18:10.899774 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:18:10.917566 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:18:10.920107 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:18:10.922550 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:18:10.929942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:18:10.935786 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 19:18:10.944150 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Feb 13 19:18:10.944169 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Feb 13 19:18:10.948771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:10.964772 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 19:18:10.977971 kernel: loop4: detected capacity change from 0 to 116808 Feb 13 19:18:10.983783 kernel: loop5: detected capacity change from 0 to 189592 Feb 13 19:18:10.989916 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:18:10.990309 (sd-merge)[1179]: Merged extensions into '/usr'. Feb 13 19:18:10.995571 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:18:10.995588 systemd[1]: Reloading... Feb 13 19:18:11.054840 zram_generator::config[1208]: No configuration found. Feb 13 19:18:11.151566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:18:11.170299 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:18:11.187739 systemd[1]: Reloading finished in 191 ms. Feb 13 19:18:11.214607 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
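The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, after which systemd reloads its unit set. The merge state can be queried and redone with the same tool:

    systemd-sysext status     # which hierarchies (/usr, /opt) currently have extensions merged
    systemd-sysext refresh    # unmerge and re-merge after extension images change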
Feb 13 19:18:11.216259 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:18:11.224971 systemd[1]: Starting ensure-sysext.service... Feb 13 19:18:11.226995 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:18:11.240762 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:18:11.240777 systemd[1]: Reloading... Feb 13 19:18:11.247497 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:18:11.247788 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:18:11.248404 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:18:11.248615 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 19:18:11.248666 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 19:18:11.251186 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:18:11.251200 systemd-tmpfiles[1240]: Skipping /boot Feb 13 19:18:11.258267 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:18:11.258285 systemd-tmpfiles[1240]: Skipping /boot Feb 13 19:18:11.290767 zram_generator::config[1268]: No configuration found. Feb 13 19:18:11.373171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:18:11.408026 systemd[1]: Reloading finished in 166 ms. Feb 13 19:18:11.424583 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:18:11.426128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:11.439236 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:18:11.441459 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:18:11.444908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:18:11.450010 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:18:11.452403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:11.457988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:18:11.464373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:18:11.467973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:18:11.472001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:18:11.475812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:18:11.482396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:18:11.483676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:18:11.484455 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:18:11.487500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
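The "Duplicate line for path ..." messages are systemd-tmpfiles warning about overlapping tmpfiles.d entries; they are harmless. A tmpfiles.d line follows the Type Path Mode User Group Age Argument layout, e.g. (the file name and path below are illustrative):

    cat <<'EOF' > /etc/tmpfiles.d/example.conf
    d /var/log/example 0755 root root -
    EOF
    systemd-tmpfiles --create /etc/tmpfiles.d/example.conf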
Feb 13 19:18:11.487764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:18:11.490312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:18:11.490461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:18:11.492125 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:18:11.492273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:18:11.492508 systemd-udevd[1308]: Using default interface naming scheme 'v255'. Feb 13 19:18:11.498293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:18:11.498537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:18:11.508188 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:18:11.511951 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:18:11.516637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:11.518772 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:18:11.528166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:18:11.544799 augenrules[1362]: No rules Feb 13 19:18:11.546239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:18:11.549526 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:18:11.553157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:18:11.556463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:18:11.557969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:18:11.559986 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:18:11.561719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:18:11.562590 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:18:11.564559 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:18:11.566782 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:18:11.569802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1339) Feb 13 19:18:11.571994 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:18:11.573655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:18:11.574085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:18:11.575692 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:18:11.577937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:18:11.579625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:18:11.581776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:18:11.583530 systemd[1]: modprobe@loop.service: Deactivated successfully. 
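The augenrules "No rules" message simply means no audit rules were installed under /etc/audit/rules.d at this point. Purely as an illustration (the watched path and key are arbitrary), a rule would be added and loaded like this:

    cat <<'EOF' > /etc/audit/rules.d/10-example.rules
    -w /etc/passwd -p wa -k passwd-changes
    EOF
    augenrules --load    # regenerate and load /etc/audit/audit.rules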
Feb 13 19:18:11.583739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:18:11.589779 systemd[1]: Finished ensure-sysext.service. Feb 13 19:18:11.598249 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:18:11.604706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:18:11.604761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:18:11.611175 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:18:11.636503 systemd-resolved[1307]: Positive Trust Anchors: Feb 13 19:18:11.636580 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:18:11.636612 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:18:11.645229 systemd-resolved[1307]: Defaulting to hostname 'linux'. Feb 13 19:18:11.646707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:18:11.647944 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:11.656340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:18:11.667046 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:18:11.676780 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:18:11.678181 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:18:11.680198 systemd-networkd[1372]: lo: Link UP Feb 13 19:18:11.680457 systemd-networkd[1372]: lo: Gained carrier Feb 13 19:18:11.681287 systemd-networkd[1372]: Enumeration completed Feb 13 19:18:11.681392 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:18:11.682049 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:11.682125 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:18:11.683047 systemd[1]: Reached target network.target - Network. Feb 13 19:18:11.683456 systemd-networkd[1372]: eth0: Link UP Feb 13 19:18:11.683515 systemd-networkd[1372]: eth0: Gained carrier Feb 13 19:18:11.683571 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:11.693971 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:18:11.694844 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:18:11.697629 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
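eth0 is matched here by the stock zz-default.network and configured via DHCP, which yields the 10.0.0.113/16 lease above. A dedicated match file for the same behaviour would look roughly like the following; the file name and the choice to pin by interface name are assumptions:

    cat <<'EOF' > /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl status eth0    # shows the lease and which .network file matched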
Feb 13 19:18:11.701178 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Feb 13 19:18:11.701726 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:18:11.701843 systemd-timesyncd[1386]: Initial clock synchronization to Thu 2025-02-13 19:18:11.633799 UTC. Feb 13 19:18:11.704712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:11.713892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:18:11.716604 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:18:11.741795 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:18:11.749760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:11.774803 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:18:11.776238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:11.777352 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:18:11.778448 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:18:11.779650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:18:11.781064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:18:11.782173 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:18:11.783535 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:18:11.784720 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:18:11.784766 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:18:11.785606 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:18:11.787320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:18:11.789757 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:18:11.797692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:18:11.799960 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:18:11.801513 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:18:11.802771 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:18:11.803698 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:18:11.804667 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:18:11.804699 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:18:11.805694 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:18:11.808787 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:18:11.807723 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:18:11.810954 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:18:11.813675 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
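The time synchronization against 10.0.0.1:123 and the socket units now listening (docker.socket, sshd.socket) can both be checked at runtime:

    timedatectl timesync-status    # NTP server, poll interval and current offset
    systemctl list-sockets         # active socket units and their listen addresses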
Feb 13 19:18:11.819622 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:18:11.820903 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:18:11.823095 jq[1410]: false Feb 13 19:18:11.823923 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:18:11.826081 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:18:11.829210 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:18:11.833735 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:18:11.834205 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:18:11.837560 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:18:11.839876 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:18:11.844738 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:18:11.847543 jq[1422]: true Feb 13 19:18:11.848185 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:18:11.848366 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:18:11.848624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:18:11.848766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:18:11.851182 dbus-daemon[1409]: [system] SELinux support is enabled Feb 13 19:18:11.853215 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:18:11.859714 extend-filesystems[1411]: Found loop3 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found loop4 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found loop5 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda1 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda2 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda3 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found usr Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda4 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda6 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda7 Feb 13 19:18:11.859714 extend-filesystems[1411]: Found vda9 Feb 13 19:18:11.859714 extend-filesystems[1411]: Checking size of /dev/vda9 Feb 13 19:18:11.859948 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:18:11.859981 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:18:11.883274 jq[1428]: true Feb 13 19:18:11.861693 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:18:11.861708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:18:11.865939 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 19:18:11.867887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:18:11.877100 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:18:11.888838 extend-filesystems[1411]: Resized partition /dev/vda9 Feb 13 19:18:11.901413 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:18:11.904019 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1338) Feb 13 19:18:11.907177 update_engine[1420]: I20250213 19:18:11.907017 1420 main.cc:92] Flatcar Update Engine starting Feb 13 19:18:11.911785 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:18:11.912375 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:18:11.912603 systemd-logind[1417]: New seat seat0. Feb 13 19:18:11.913200 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:18:11.913295 update_engine[1420]: I20250213 19:18:11.913244 1420 update_check_scheduler.cc:74] Next update check in 6m8s Feb 13 19:18:11.916523 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:18:11.929371 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:18:11.941129 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:18:11.967651 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:18:11.967651 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:18:11.967651 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:18:11.973324 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Feb 13 19:18:11.968666 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:18:11.970097 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:18:11.976352 bash[1459]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:18:11.978014 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:18:11.980255 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:18:11.981737 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:18:12.053585 sshd_keygen[1423]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:18:12.073315 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:18:12.083555 containerd[1436]: time="2025-02-13T19:18:12.082320101Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:18:12.083996 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:18:12.090647 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:18:12.090840 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:18:12.094558 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:18:12.106415 containerd[1436]: time="2025-02-13T19:18:12.106358691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.107828631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.107865155Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.107882920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108059408Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108077491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108130346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108141976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108284211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108298908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108311017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108356 containerd[1436]: time="2025-02-13T19:18:12.108319500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108288 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:18:12.108643 containerd[1436]: time="2025-02-13T19:18:12.108381317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108643 containerd[1436]: time="2025-02-13T19:18:12.108553345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108680 containerd[1436]: time="2025-02-13T19:18:12.108642286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:12.108680 containerd[1436]: time="2025-02-13T19:18:12.108655271Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:18:12.108754 containerd[1436]: time="2025-02-13T19:18:12.108725491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:18:12.108824 containerd[1436]: time="2025-02-13T19:18:12.108793761Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:18:12.113407 containerd[1436]: time="2025-02-13T19:18:12.113308255Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:18:12.113407 containerd[1436]: time="2025-02-13T19:18:12.113371626Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:18:12.113779 containerd[1436]: time="2025-02-13T19:18:12.113715680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:18:12.114398 containerd[1436]: time="2025-02-13T19:18:12.114364756Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.114511173Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.114672247Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.114936840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115040319Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115056729Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115070869Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115083973Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115097316Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115109943Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115122967Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115137625Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115150132Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115162160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:18:12.116121 containerd[1436]: time="2025-02-13T19:18:12.115172755Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115194025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115209638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115222065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115328890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115340600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115353824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115364937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115376966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115388875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115402059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115412693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115428227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115439499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115453719Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:18:12.116405 containerd[1436]: time="2025-02-13T19:18:12.115473275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115486499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115496297Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115670476Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115689515Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115700030Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115711700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115720622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115732173Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115757943Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:18:12.116650 containerd[1436]: time="2025-02-13T19:18:12.115775827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:18:12.116870 containerd[1436]: time="2025-02-13T19:18:12.116037433Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:18:12.116870 containerd[1436]: time="2025-02-13T19:18:12.116085229Z" level=info msg="Connect containerd service" Feb 13 19:18:12.116870 containerd[1436]: time="2025-02-13T19:18:12.116111956Z" level=info msg="using legacy CRI server" Feb 13 19:18:12.116870 containerd[1436]: time="2025-02-13T19:18:12.116119006Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:18:12.116870 containerd[1436]: time="2025-02-13T19:18:12.116359621Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:18:12.117067 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:18:12.117880 containerd[1436]: time="2025-02-13T19:18:12.117604918Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:18:12.117880 containerd[1436]: time="2025-02-13T19:18:12.117836730Z" level=info msg="Start subscribing containerd event" Feb 13 19:18:12.117974 containerd[1436]: time="2025-02-13T19:18:12.117900579Z" level=info msg="Start recovering state" Feb 13 19:18:12.117994 containerd[1436]: time="2025-02-13T19:18:12.117982669Z" level=info msg="Start event monitor" Feb 13 19:18:12.118012 containerd[1436]: time="2025-02-13T19:18:12.117995295Z" level=info msg="Start snapshots syncer" Feb 13 19:18:12.118047 containerd[1436]: time="2025-02-13T19:18:12.118027000Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:18:12.118047 containerd[1436]: time="2025-02-13T19:18:12.118043331Z" level=info msg="Start streaming server" Feb 13 19:18:12.118520 containerd[1436]: time="2025-02-13T19:18:12.118495565Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:18:12.118613 containerd[1436]: time="2025-02-13T19:18:12.118599164Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:18:12.118718 containerd[1436]: time="2025-02-13T19:18:12.118705909Z" level=info msg="containerd successfully booted in 0.037772s" Feb 13 19:18:12.119312 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:18:12.120613 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:18:12.122094 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:18:12.762892 systemd-networkd[1372]: eth0: Gained IPv6LL Feb 13 19:18:12.765310 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:18:12.767570 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:18:12.776978 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:18:12.779424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:12.781539 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:18:12.797867 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:18:12.798065 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
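The "failed to load cni during init" error above is expected at this point in the boot: the CRI plugin's NetworkPluginConfDir (/etc/cni/net.d, per the config dump above) is still empty, and the log later shows containerd waiting for another component to drop a config, which the Cilium pod created further down is responsible for providing. As an illustration only, not a record of this node's files, here is a small Python sketch of the check behind that message: look for a *.conf/*.conflist/*.json file in the configured directory and, if one exists, report its plugin chain.

```python
import glob
import json
import os

# Directory taken from the CRI config dump above (NetworkPluginConfDir).
CNI_CONF_DIR = "/etc/cni/net.d"


def find_cni_config(conf_dir: str = CNI_CONF_DIR):
    """Return the first CNI config file found in conf_dir, or None.

    Until a CNI plugin (Cilium in this log) writes such a file, the CRI
    plugin reports "no network config found in /etc/cni/net.d".
    """
    patterns = ("*.conflist", "*.conf", "*.json")
    files = sorted(
        f for pattern in patterns for f in glob.glob(os.path.join(conf_dir, pattern))
    )
    return files[0] if files else None


if __name__ == "__main__":
    path = find_cni_config()
    if path is None:
        print(f"no network config found in {CNI_CONF_DIR} (the state logged above)")
    else:
        with open(path) as fh:
            conf = json.load(fh)
        # A .conflist carries a "plugins" list; a single .conf describes one plugin.
        plugins = [p.get("type") for p in conf.get("plugins", [conf])]
        print(f"{path}: network {conf.get('name')!r}, plugin chain {plugins}")
```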
Feb 13 19:18:12.800039 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:18:12.803351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:18:13.288948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:13.290543 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:18:13.293125 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:18:13.297794 systemd[1]: Startup finished in 555ms (kernel) + 4.298s (initrd) + 3.328s (userspace) = 8.182s. Feb 13 19:18:13.771361 kubelet[1516]: E0213 19:18:13.771248 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:18:13.774065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:18:13.774224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:18:18.274999 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:18:18.287546 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:49970.service - OpenSSH per-connection server daemon (10.0.0.1:49970). Feb 13 19:18:18.370323 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 49970 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:18.374853 sshd-session[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:18.384503 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:18:18.394028 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:18:18.396479 systemd-logind[1417]: New session 1 of user core. Feb 13 19:18:18.408908 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:18:18.425179 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:18:18.428058 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:18:18.505809 systemd[1534]: Queued start job for default target default.target. Feb 13 19:18:18.515679 systemd[1534]: Created slice app.slice - User Application Slice. Feb 13 19:18:18.515723 systemd[1534]: Reached target paths.target - Paths. Feb 13 19:18:18.515734 systemd[1534]: Reached target timers.target - Timers. Feb 13 19:18:18.516991 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:18:18.529901 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:18:18.530003 systemd[1534]: Reached target sockets.target - Sockets. Feb 13 19:18:18.530022 systemd[1534]: Reached target basic.target - Basic System. Feb 13 19:18:18.530058 systemd[1534]: Reached target default.target - Main User Target. Feb 13 19:18:18.530084 systemd[1534]: Startup finished in 95ms. Feb 13 19:18:18.530190 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:18:18.531359 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:18:18.604845 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:49982.service - OpenSSH per-connection server daemon (10.0.0.1:49982). 
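The kubelet failure above is the expected first-boot state: kubelet.service comes up before /var/lib/kubelet/config.yaml exists, so it exits with status 1, and it only stays running once that file has been created (here, presumably by the install.sh run over SSH below, after which the kubelet is restarted around 19:18:20). As a hedged sketch of what such a file contains, not the configuration actually written on this node, the snippet below emits a minimal KubeletConfiguration; kind and apiVersion are the standard v1beta1 values, while cgroupDriver and staticPodPath echo the "CgroupDriver":"systemd" and /etc/kubernetes/manifests values the running kubelet reports further down.

```python
from pathlib import Path

# Minimal sketch only: the real /var/lib/kubelet/config.yaml on this node is
# produced elsewhere (e.g. by an installer or kubeadm).  cgroupDriver and
# staticPodPath echo values visible later in this log; all other fields are
# deliberately left at their defaults.
MINIMAL_KUBELET_CONFIG = """\
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""


def write_example(path: str = "/tmp/kubelet-config-example.yaml") -> Path:
    """Write the example to a scratch path, not to /var/lib/kubelet/config.yaml."""
    out = Path(path)
    out.write_text(MINIMAL_KUBELET_CONFIG)
    return out


if __name__ == "__main__":
    print(f"example written to {write_example()}")
    print(MINIMAL_KUBELET_CONFIG)
```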
Feb 13 19:18:18.643142 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 49982 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:18.644430 sshd-session[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:18.650483 systemd-logind[1417]: New session 2 of user core. Feb 13 19:18:18.659964 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:18:18.718814 sshd[1547]: Connection closed by 10.0.0.1 port 49982 Feb 13 19:18:18.719526 sshd-session[1545]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:18.729193 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:49982.service: Deactivated successfully. Feb 13 19:18:18.730760 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:18:18.732876 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:18:18.747103 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:49986.service - OpenSSH per-connection server daemon (10.0.0.1:49986). Feb 13 19:18:18.748138 systemd-logind[1417]: Removed session 2. Feb 13 19:18:18.787892 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 49986 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:18.789168 sshd-session[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:18.793657 systemd-logind[1417]: New session 3 of user core. Feb 13 19:18:18.799937 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:18:18.849718 sshd[1554]: Connection closed by 10.0.0.1 port 49986 Feb 13 19:18:18.849583 sshd-session[1552]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:18.863241 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:49986.service: Deactivated successfully. Feb 13 19:18:18.864621 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:18:18.865806 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:18:18.866918 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:49998.service - OpenSSH per-connection server daemon (10.0.0.1:49998). Feb 13 19:18:18.867666 systemd-logind[1417]: Removed session 3. Feb 13 19:18:18.911723 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 49998 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:18.913058 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:18.916930 systemd-logind[1417]: New session 4 of user core. Feb 13 19:18:18.928946 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:18:18.983349 sshd[1561]: Connection closed by 10.0.0.1 port 49998 Feb 13 19:18:18.983224 sshd-session[1559]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:18.993543 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:49998.service: Deactivated successfully. Feb 13 19:18:18.995416 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:18:18.996917 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:18:19.010077 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:50006.service - OpenSSH per-connection server daemon (10.0.0.1:50006). Feb 13 19:18:19.010927 systemd-logind[1417]: Removed session 4. 
Feb 13 19:18:19.051327 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 50006 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:19.052617 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:19.056135 systemd-logind[1417]: New session 5 of user core. Feb 13 19:18:19.074923 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:18:19.136157 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:18:19.140172 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:19.153779 sudo[1569]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:19.158021 sshd[1568]: Connection closed by 10.0.0.1 port 50006 Feb 13 19:18:19.157857 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:19.169322 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:50006.service: Deactivated successfully. Feb 13 19:18:19.170699 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:18:19.171916 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:18:19.173211 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:50016.service - OpenSSH per-connection server daemon (10.0.0.1:50016). Feb 13 19:18:19.175409 systemd-logind[1417]: Removed session 5. Feb 13 19:18:19.217281 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 50016 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:19.218642 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:19.224842 systemd-logind[1417]: New session 6 of user core. Feb 13 19:18:19.232925 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:18:19.284064 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:18:19.284333 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:19.288068 sudo[1578]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:19.294371 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:18:19.295795 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:19.315094 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:18:19.341547 augenrules[1600]: No rules Feb 13 19:18:19.342847 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:18:19.343840 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:18:19.346018 sudo[1577]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:19.347378 sshd[1576]: Connection closed by 10.0.0.1 port 50016 Feb 13 19:18:19.347995 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:19.362158 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:50016.service: Deactivated successfully. Feb 13 19:18:19.363808 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:18:19.366165 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:18:19.367376 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:50024.service - OpenSSH per-connection server daemon (10.0.0.1:50024). Feb 13 19:18:19.368181 systemd-logind[1417]: Removed session 6. 
Feb 13 19:18:19.411089 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 50024 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:18:19.412458 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:19.416471 systemd-logind[1417]: New session 7 of user core. Feb 13 19:18:19.433920 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:18:19.485610 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:18:19.486220 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:19.509116 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:18:19.524639 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:18:19.525855 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:18:19.990285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:20.005063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:20.025258 systemd[1]: Reloading requested from client PID 1652 ('systemctl') (unit session-7.scope)... Feb 13 19:18:20.025277 systemd[1]: Reloading... Feb 13 19:18:20.083872 zram_generator::config[1693]: No configuration found. Feb 13 19:18:20.198798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:18:20.248865 systemd[1]: Reloading finished in 223 ms. Feb 13 19:18:20.283982 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:18:20.284055 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:18:20.284252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:20.287280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:20.386436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:20.390731 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:18:20.425805 kubelet[1736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:18:20.425805 kubelet[1736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:18:20.425805 kubelet[1736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:18:20.426108 kubelet[1736]: I0213 19:18:20.425914 1736 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:18:20.878766 kubelet[1736]: I0213 19:18:20.878324 1736 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:18:20.878766 kubelet[1736]: I0213 19:18:20.878354 1736 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:18:20.878766 kubelet[1736]: I0213 19:18:20.878586 1736 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:18:20.945167 kubelet[1736]: I0213 19:18:20.945071 1736 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:18:20.953848 kubelet[1736]: E0213 19:18:20.953815 1736 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:18:20.953848 kubelet[1736]: I0213 19:18:20.953845 1736 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:18:20.957047 kubelet[1736]: I0213 19:18:20.957001 1736 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:18:20.957878 kubelet[1736]: I0213 19:18:20.957845 1736 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:18:20.958035 kubelet[1736]: I0213 19:18:20.957985 1736 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:18:20.958461 kubelet[1736]: I0213 19:18:20.958015 1736 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:18:20.958461 kubelet[1736]: I0213 19:18:20.958341 1736 topology_manager.go:138] "Creating 
topology manager with none policy" Feb 13 19:18:20.958461 kubelet[1736]: I0213 19:18:20.958352 1736 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:18:20.958657 kubelet[1736]: I0213 19:18:20.958532 1736 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:18:20.961910 kubelet[1736]: I0213 19:18:20.959589 1736 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:18:20.961910 kubelet[1736]: I0213 19:18:20.959620 1736 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:18:20.961910 kubelet[1736]: I0213 19:18:20.959771 1736 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:18:20.961910 kubelet[1736]: I0213 19:18:20.959781 1736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:18:20.961910 kubelet[1736]: E0213 19:18:20.959987 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:20.961910 kubelet[1736]: E0213 19:18:20.960020 1736 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:20.966881 kubelet[1736]: I0213 19:18:20.966850 1736 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:18:20.968632 kubelet[1736]: I0213 19:18:20.968561 1736 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:18:20.968756 kubelet[1736]: W0213 19:18:20.968731 1736 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:18:20.971382 kubelet[1736]: I0213 19:18:20.971267 1736 server.go:1269] "Started kubelet" Feb 13 19:18:20.974065 kubelet[1736]: I0213 19:18:20.974007 1736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:18:20.977568 kubelet[1736]: I0213 19:18:20.975094 1736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:18:20.977568 kubelet[1736]: I0213 19:18:20.975453 1736 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:18:20.977568 kubelet[1736]: I0213 19:18:20.975606 1736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:18:20.977568 kubelet[1736]: I0213 19:18:20.976132 1736 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:18:20.977568 kubelet[1736]: I0213 19:18:20.977110 1736 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:18:20.979218 kubelet[1736]: E0213 19:18:20.979186 1736 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:18:20.979524 kubelet[1736]: I0213 19:18:20.979500 1736 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:18:20.979648 kubelet[1736]: I0213 19:18:20.979630 1736 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:18:20.979700 kubelet[1736]: I0213 19:18:20.979687 1736 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:18:20.981055 kubelet[1736]: I0213 19:18:20.981033 1736 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:18:20.981345 kubelet[1736]: I0213 19:18:20.981320 1736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:18:20.983321 kubelet[1736]: E0213 19:18:20.983291 1736 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Feb 13 19:18:20.984453 kubelet[1736]: I0213 19:18:20.984431 1736 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:18:20.990832 kubelet[1736]: E0213 19:18:20.990781 1736 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.113\" not found" node="10.0.0.113" Feb 13 19:18:20.997259 kubelet[1736]: I0213 19:18:20.997213 1736 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:18:20.997259 kubelet[1736]: I0213 19:18:20.997235 1736 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:18:20.997259 kubelet[1736]: I0213 19:18:20.997255 1736 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:18:21.067327 kubelet[1736]: I0213 19:18:21.065729 1736 policy_none.go:49] "None policy: Start" Feb 13 19:18:21.067327 kubelet[1736]: I0213 19:18:21.066526 1736 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:18:21.067327 kubelet[1736]: I0213 19:18:21.066549 1736 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:18:21.076465 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:18:21.083975 kubelet[1736]: E0213 19:18:21.083928 1736 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Feb 13 19:18:21.088776 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:18:21.091552 kubelet[1736]: I0213 19:18:21.091496 1736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:18:21.092074 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:18:21.093232 kubelet[1736]: I0213 19:18:21.093108 1736 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:18:21.093232 kubelet[1736]: I0213 19:18:21.093134 1736 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:18:21.093232 kubelet[1736]: I0213 19:18:21.093153 1736 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:18:21.093342 kubelet[1736]: E0213 19:18:21.093253 1736 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:18:21.099272 kubelet[1736]: I0213 19:18:21.098714 1736 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:18:21.099272 kubelet[1736]: I0213 19:18:21.098978 1736 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:18:21.099272 kubelet[1736]: I0213 19:18:21.098990 1736 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:18:21.099272 kubelet[1736]: I0213 19:18:21.099208 1736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:18:21.101268 kubelet[1736]: E0213 19:18:21.101246 1736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.113\" not found" Feb 13 19:18:21.200023 kubelet[1736]: I0213 19:18:21.199914 1736 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.113" Feb 13 19:18:21.205224 kubelet[1736]: I0213 19:18:21.205194 1736 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.113" Feb 13 19:18:21.314354 kubelet[1736]: I0213 19:18:21.314325 1736 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:18:21.314853 containerd[1436]: time="2025-02-13T19:18:21.314815852Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:18:21.315144 kubelet[1736]: I0213 19:18:21.315001 1736 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:18:21.417100 sudo[1611]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:21.418580 sshd[1610]: Connection closed by 10.0.0.1 port 50024 Feb 13 19:18:21.418907 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:21.421651 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:50024.service: Deactivated successfully. Feb 13 19:18:21.423233 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:18:21.425020 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:18:21.426082 systemd-logind[1417]: Removed session 7. 
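For orientation on the networking lines above: the kubelet has just learned the node's pod CIDR from the API server and pushed it to containerd over CRI ("Updating runtime config through cri with podcidr"), while containerd keeps waiting for a CNI config to appear, which the Cilium pod created below is expected to provide. A small sketch, using only the 192.168.1.0/24 value from the log, of what that per-node allocation covers; how many of those addresses are actually handed to pods depends on the CNI IPAM, which the log does not show.

```python
import ipaddress

# Pod CIDR reported by the kubelet above.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")

if __name__ == "__main__":
    # Conventional usable range, excluding the network and broadcast addresses.
    hosts = list(pod_cidr.hosts())
    print(f"node pod CIDR: {pod_cidr}")
    print(f"usable pod addresses: {len(hosts)} ({hosts[0]} .. {hosts[-1]})")
```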
Feb 13 19:18:21.880922 kubelet[1736]: I0213 19:18:21.880880 1736 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:18:21.881422 kubelet[1736]: W0213 19:18:21.881088 1736 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:18:21.881422 kubelet[1736]: W0213 19:18:21.881097 1736 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:18:21.881422 kubelet[1736]: W0213 19:18:21.881119 1736 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:18:21.960993 kubelet[1736]: I0213 19:18:21.960946 1736 apiserver.go:52] "Watching apiserver" Feb 13 19:18:21.960993 kubelet[1736]: E0213 19:18:21.960953 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:21.976097 systemd[1]: Created slice kubepods-besteffort-podd311e1d7_74f3_4fbd_bf42_86960e9ed996.slice - libcontainer container kubepods-besteffort-podd311e1d7_74f3_4fbd_bf42_86960e9ed996.slice. Feb 13 19:18:21.980027 kubelet[1736]: I0213 19:18:21.979985 1736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:18:21.985823 kubelet[1736]: I0213 19:18:21.985522 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-xtables-lock\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.985823 kubelet[1736]: I0213 19:18:21.985558 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae29f47a-3dbc-44dc-b015-07e5e016033e-clustermesh-secrets\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.985823 kubelet[1736]: I0213 19:18:21.985575 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-net\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.985823 kubelet[1736]: I0213 19:18:21.985590 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-hubble-tls\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.985823 kubelet[1736]: I0213 19:18:21.985605 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-hostproc\") pod \"cilium-2fj2l\" (UID: 
\"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.985823 kubelet[1736]: I0213 19:18:21.985620 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-etc-cni-netd\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986039 kubelet[1736]: I0213 19:18:21.985633 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-lib-modules\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986039 kubelet[1736]: I0213 19:18:21.985653 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-kernel\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986039 kubelet[1736]: I0213 19:18:21.985678 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d311e1d7-74f3-4fbd-bf42-86960e9ed996-xtables-lock\") pod \"kube-proxy-jn28l\" (UID: \"d311e1d7-74f3-4fbd-bf42-86960e9ed996\") " pod="kube-system/kube-proxy-jn28l" Feb 13 19:18:21.986039 kubelet[1736]: I0213 19:18:21.985713 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-run\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986039 kubelet[1736]: I0213 19:18:21.985761 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-bpf-maps\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986039 kubelet[1736]: I0213 19:18:21.985779 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-cgroup\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986161 kubelet[1736]: I0213 19:18:21.985800 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cni-path\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986161 kubelet[1736]: I0213 19:18:21.985827 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2864\" (UniqueName: \"kubernetes.io/projected/d311e1d7-74f3-4fbd-bf42-86960e9ed996-kube-api-access-d2864\") pod \"kube-proxy-jn28l\" (UID: \"d311e1d7-74f3-4fbd-bf42-86960e9ed996\") " pod="kube-system/kube-proxy-jn28l" Feb 13 19:18:21.986161 kubelet[1736]: I0213 19:18:21.985854 1736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-config-path\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986161 kubelet[1736]: I0213 19:18:21.985878 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zklfz\" (UniqueName: \"kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-kube-api-access-zklfz\") pod \"cilium-2fj2l\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " pod="kube-system/cilium-2fj2l" Feb 13 19:18:21.986161 kubelet[1736]: I0213 19:18:21.985893 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d311e1d7-74f3-4fbd-bf42-86960e9ed996-kube-proxy\") pod \"kube-proxy-jn28l\" (UID: \"d311e1d7-74f3-4fbd-bf42-86960e9ed996\") " pod="kube-system/kube-proxy-jn28l" Feb 13 19:18:21.986260 kubelet[1736]: I0213 19:18:21.986002 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d311e1d7-74f3-4fbd-bf42-86960e9ed996-lib-modules\") pod \"kube-proxy-jn28l\" (UID: \"d311e1d7-74f3-4fbd-bf42-86960e9ed996\") " pod="kube-system/kube-proxy-jn28l" Feb 13 19:18:21.987782 systemd[1]: Created slice kubepods-burstable-podae29f47a_3dbc_44dc_b015_07e5e016033e.slice - libcontainer container kubepods-burstable-podae29f47a_3dbc_44dc_b015_07e5e016033e.slice. Feb 13 19:18:22.286126 kubelet[1736]: E0213 19:18:22.286007 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:22.288319 containerd[1436]: time="2025-02-13T19:18:22.288119921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn28l,Uid:d311e1d7-74f3-4fbd-bf42-86960e9ed996,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:22.296253 kubelet[1736]: E0213 19:18:22.296215 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:22.296840 containerd[1436]: time="2025-02-13T19:18:22.296804403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fj2l,Uid:ae29f47a-3dbc-44dc-b015-07e5e016033e,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:22.786777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406236374.mount: Deactivated successfully. 
Feb 13 19:18:22.793083 containerd[1436]: time="2025-02-13T19:18:22.793027988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:22.795568 containerd[1436]: time="2025-02-13T19:18:22.795520871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:22.796193 containerd[1436]: time="2025-02-13T19:18:22.796143723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:18:22.796781 containerd[1436]: time="2025-02-13T19:18:22.796755160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:18:22.797251 containerd[1436]: time="2025-02-13T19:18:22.797218846Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:22.801111 containerd[1436]: time="2025-02-13T19:18:22.801057969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:22.802083 containerd[1436]: time="2025-02-13T19:18:22.802036229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.124904ms" Feb 13 19:18:22.802778 containerd[1436]: time="2025-02-13T19:18:22.802677040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.458499ms" Feb 13 19:18:22.902380 containerd[1436]: time="2025-02-13T19:18:22.901995625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:22.902380 containerd[1436]: time="2025-02-13T19:18:22.902086103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:22.902380 containerd[1436]: time="2025-02-13T19:18:22.902102147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:22.902380 containerd[1436]: time="2025-02-13T19:18:22.902187357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:22.902608 containerd[1436]: time="2025-02-13T19:18:22.902544122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:22.902608 containerd[1436]: time="2025-02-13T19:18:22.902593612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:22.902668 containerd[1436]: time="2025-02-13T19:18:22.902605984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:22.902914 containerd[1436]: time="2025-02-13T19:18:22.902680179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:22.961331 kubelet[1736]: E0213 19:18:22.961287 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:22.995920 systemd[1]: Started cri-containerd-70d9b5ab4f7e6b9ceecdf27436c7c3855a0750b3a4479de6467c4ac12f644a25.scope - libcontainer container 70d9b5ab4f7e6b9ceecdf27436c7c3855a0750b3a4479de6467c4ac12f644a25. Feb 13 19:18:22.998738 systemd[1]: Started cri-containerd-0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8.scope - libcontainer container 0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8. Feb 13 19:18:23.018345 containerd[1436]: time="2025-02-13T19:18:23.018190707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn28l,Uid:d311e1d7-74f3-4fbd-bf42-86960e9ed996,Namespace:kube-system,Attempt:0,} returns sandbox id \"70d9b5ab4f7e6b9ceecdf27436c7c3855a0750b3a4479de6467c4ac12f644a25\"" Feb 13 19:18:23.021532 kubelet[1736]: E0213 19:18:23.021500 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:23.022737 containerd[1436]: time="2025-02-13T19:18:23.022704317Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:18:23.024653 containerd[1436]: time="2025-02-13T19:18:23.024591693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fj2l,Uid:ae29f47a-3dbc-44dc-b015-07e5e016033e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\"" Feb 13 19:18:23.025499 kubelet[1736]: E0213 19:18:23.025467 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:23.933491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount230624890.mount: Deactivated successfully. 
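The recurring "Nameserver limits exceeded" errors above come from the kubelet's pod DNS handling: it copies the node's resolv.conf into pods but keeps at most three nameservers, logging this error and applying only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8 on this node) whenever the list is longer. Below is a small sketch of that trimming rule; the first three entries in the sample input are the ones the log reports as applied, and the fourth is a hypothetical stand-in for whichever extra server triggered the warning here.

```python
MAX_DNS_NAMESERVERS = 3  # kubelet cap that triggers "Nameserver limits exceeded"

# Hypothetical resolv.conf: the first three servers are the ones the log shows
# as applied; 192.0.2.53 (a documentation address) stands in for the extra one.
SAMPLE_RESOLV_CONF = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 192.0.2.53
"""


def applied_nameservers(resolv_conf: str, limit: int = MAX_DNS_NAMESERVERS):
    """Return (applied, omitted) nameserver lists, mimicking the kubelet's cap."""
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:limit], servers[limit:]


if __name__ == "__main__":
    applied, omitted = applied_nameservers(SAMPLE_RESOLV_CONF)
    print("applied nameserver line:", " ".join(applied))
    if omitted:
        print("omitted (causes the logged error):", " ".join(omitted))
```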
Feb 13 19:18:23.961833 kubelet[1736]: E0213 19:18:23.961802 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:24.209070 containerd[1436]: time="2025-02-13T19:18:24.208928402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:24.209600 containerd[1436]: time="2025-02-13T19:18:24.209561882Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 19:18:24.210731 containerd[1436]: time="2025-02-13T19:18:24.210688834Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:24.213304 containerd[1436]: time="2025-02-13T19:18:24.213265149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:24.213976 containerd[1436]: time="2025-02-13T19:18:24.213943221Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.191199026s" Feb 13 19:18:24.214047 containerd[1436]: time="2025-02-13T19:18:24.213979150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:18:24.215185 containerd[1436]: time="2025-02-13T19:18:24.215027697Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:18:24.216483 containerd[1436]: time="2025-02-13T19:18:24.216449153Z" level=info msg="CreateContainer within sandbox \"70d9b5ab4f7e6b9ceecdf27436c7c3855a0750b3a4479de6467c4ac12f644a25\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:18:24.237123 containerd[1436]: time="2025-02-13T19:18:24.236995034Z" level=info msg="CreateContainer within sandbox \"70d9b5ab4f7e6b9ceecdf27436c7c3855a0750b3a4479de6467c4ac12f644a25\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39dbf7d934afc84a86925c56a157f409ccd2eba8cede669329275762dc1f0bff\"" Feb 13 19:18:24.237670 containerd[1436]: time="2025-02-13T19:18:24.237641408Z" level=info msg="StartContainer for \"39dbf7d934afc84a86925c56a157f409ccd2eba8cede669329275762dc1f0bff\"" Feb 13 19:18:24.267969 systemd[1]: Started cri-containerd-39dbf7d934afc84a86925c56a157f409ccd2eba8cede669329275762dc1f0bff.scope - libcontainer container 39dbf7d934afc84a86925c56a157f409ccd2eba8cede669329275762dc1f0bff. 
Feb 13 19:18:24.294909 containerd[1436]: time="2025-02-13T19:18:24.294455776Z" level=info msg="StartContainer for \"39dbf7d934afc84a86925c56a157f409ccd2eba8cede669329275762dc1f0bff\" returns successfully" Feb 13 19:18:24.962351 kubelet[1736]: E0213 19:18:24.962297 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:25.103162 kubelet[1736]: E0213 19:18:25.103121 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:25.111824 kubelet[1736]: I0213 19:18:25.111711 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jn28l" podStartSLOduration=2.918960356 podStartE2EDuration="4.111694422s" podCreationTimestamp="2025-02-13 19:18:21 +0000 UTC" firstStartedPulling="2025-02-13 19:18:23.022168117 +0000 UTC m=+2.628127776" lastFinishedPulling="2025-02-13 19:18:24.214902183 +0000 UTC m=+3.820861842" observedRunningTime="2025-02-13 19:18:25.111580072 +0000 UTC m=+4.717539731" watchObservedRunningTime="2025-02-13 19:18:25.111694422 +0000 UTC m=+4.717654081" Feb 13 19:18:25.962784 kubelet[1736]: E0213 19:18:25.962712 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:26.104291 kubelet[1736]: E0213 19:18:26.104224 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:26.963644 kubelet[1736]: E0213 19:18:26.963592 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:27.963808 kubelet[1736]: E0213 19:18:27.963721 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:28.964594 kubelet[1736]: E0213 19:18:28.964530 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:29.965537 kubelet[1736]: E0213 19:18:29.965473 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:30.966202 kubelet[1736]: E0213 19:18:30.966091 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:31.966518 kubelet[1736]: E0213 19:18:31.966475 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:32.967867 kubelet[1736]: E0213 19:18:32.967822 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:32.979835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710115048.mount: Deactivated successfully. 
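The pod_startup_latency_tracker line above reports two durations for kube-proxy-jn28l that can be reproduced from the timestamps in the same line: the E2E duration is the observed running time minus the pod creation time, and the SLO duration additionally subtracts the image-pull window. A quick arithmetic check using only values copied from the log (sub-microsecond digits truncated):

```python
from datetime import datetime, timezone

UTC = timezone.utc
# Timestamps copied from the pod_startup_latency_tracker line above,
# truncated to microseconds for datetime().
created               = datetime(2025, 2, 13, 19, 18, 21, 0, tzinfo=UTC)
first_started_pulling = datetime(2025, 2, 13, 19, 18, 23, 22168, tzinfo=UTC)
last_finished_pulling = datetime(2025, 2, 13, 19, 18, 24, 214902, tzinfo=UTC)
observed_running      = datetime(2025, 2, 13, 19, 18, 25, 111694, tzinfo=UTC)

e2e = (observed_running - created).total_seconds()
pull_window = (last_finished_pulling - first_started_pulling).total_seconds()
slo = e2e - pull_window  # image-pull time is excluded from the SLO duration

print(f"podStartE2EDuration ~= {e2e:.6f}s   (log: 4.111694422s)")
print(f"image pull window   ~= {pull_window:.6f}s")
print(f"podStartSLOduration ~= {slo:.6f}s   (log: 2.918960356)")
```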
Feb 13 19:18:33.968407 kubelet[1736]: E0213 19:18:33.968364 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:34.223124 containerd[1436]: time="2025-02-13T19:18:34.223015930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:34.224135 containerd[1436]: time="2025-02-13T19:18:34.224070728Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:18:34.224795 containerd[1436]: time="2025-02-13T19:18:34.224710431Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:34.226870 containerd[1436]: time="2025-02-13T19:18:34.226835890Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.011762312s" Feb 13 19:18:34.226945 containerd[1436]: time="2025-02-13T19:18:34.226874011Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:18:34.228853 containerd[1436]: time="2025-02-13T19:18:34.228726510Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:18:34.241301 containerd[1436]: time="2025-02-13T19:18:34.241206623Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\"" Feb 13 19:18:34.241986 containerd[1436]: time="2025-02-13T19:18:34.241654163Z" level=info msg="StartContainer for \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\"" Feb 13 19:18:34.269973 systemd[1]: Started cri-containerd-d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c.scope - libcontainer container d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c. Feb 13 19:18:34.291041 containerd[1436]: time="2025-02-13T19:18:34.290976269Z" level=info msg="StartContainer for \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\" returns successfully" Feb 13 19:18:34.326842 systemd[1]: cri-containerd-d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c.scope: Deactivated successfully. 
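Two timed image pulls appear in this log: registry.k8s.io/kube-proxy:v1.31.6 (26,768,275 bytes in 1.191199026s, above) and the Cilium 1.12.5 image just pulled here (157,636,062 bytes in 10.011762312s). Turning those logged figures into effective pull rates is simple arithmetic; note that the log does not separate download time from unpacking, so the rates below lump both together.

```python
# Image sizes and durations copied from the "Pulled image ... in ..." lines above.
PULLS = {
    "registry.k8s.io/kube-proxy:v1.31.6": (26_768_275, 1.191199026),
    "quay.io/cilium/cilium:v1.12.5":      (157_636_062, 10.011762312),
}

for image, (size_bytes, seconds) in PULLS.items():
    rate_mib_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {size_bytes / 1e6:.1f} MB in {seconds:.2f}s  (~{rate_mib_s:.1f} MiB/s)")
```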
Feb 13 19:18:34.457171 containerd[1436]: time="2025-02-13T19:18:34.457111021Z" level=info msg="shim disconnected" id=d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c namespace=k8s.io Feb 13 19:18:34.457171 containerd[1436]: time="2025-02-13T19:18:34.457161609Z" level=warning msg="cleaning up after shim disconnected" id=d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c namespace=k8s.io Feb 13 19:18:34.457171 containerd[1436]: time="2025-02-13T19:18:34.457170440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:34.969061 kubelet[1736]: E0213 19:18:34.969012 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:35.121708 kubelet[1736]: E0213 19:18:35.121537 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:35.123405 containerd[1436]: time="2025-02-13T19:18:35.123368736Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:18:35.135461 containerd[1436]: time="2025-02-13T19:18:35.135376984Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\"" Feb 13 19:18:35.135843 containerd[1436]: time="2025-02-13T19:18:35.135817640Z" level=info msg="StartContainer for \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\"" Feb 13 19:18:35.163934 systemd[1]: Started cri-containerd-f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563.scope - libcontainer container f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563. Feb 13 19:18:35.185463 containerd[1436]: time="2025-02-13T19:18:35.185418124Z" level=info msg="StartContainer for \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\" returns successfully" Feb 13 19:18:35.197588 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:18:35.197811 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:35.197869 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:35.205100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:35.205256 systemd[1]: cri-containerd-f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563.scope: Deactivated successfully. Feb 13 19:18:35.214927 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:35.229302 containerd[1436]: time="2025-02-13T19:18:35.229180705Z" level=info msg="shim disconnected" id=f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563 namespace=k8s.io Feb 13 19:18:35.229302 containerd[1436]: time="2025-02-13T19:18:35.229235772Z" level=warning msg="cleaning up after shim disconnected" id=f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563 namespace=k8s.io Feb 13 19:18:35.229302 containerd[1436]: time="2025-02-13T19:18:35.229252475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:35.236099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c-rootfs.mount: Deactivated successfully. 
Feb 13 19:18:35.969137 kubelet[1736]: E0213 19:18:35.969085 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:36.124624 kubelet[1736]: E0213 19:18:36.124594 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:36.126252 containerd[1436]: time="2025-02-13T19:18:36.126220180Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:18:36.139829 containerd[1436]: time="2025-02-13T19:18:36.139782709Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\"" Feb 13 19:18:36.140292 containerd[1436]: time="2025-02-13T19:18:36.140257001Z" level=info msg="StartContainer for \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\"" Feb 13 19:18:36.168900 systemd[1]: Started cri-containerd-3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881.scope - libcontainer container 3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881. Feb 13 19:18:36.195637 containerd[1436]: time="2025-02-13T19:18:36.195590420Z" level=info msg="StartContainer for \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\" returns successfully" Feb 13 19:18:36.216134 systemd[1]: cri-containerd-3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881.scope: Deactivated successfully. Feb 13 19:18:36.235791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881-rootfs.mount: Deactivated successfully. 
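The kubelet warning repeated throughout this log ("Nameserver limits exceeded") means the node's resolv.conf lists more nameservers than kubelet will pass through to pods, so it keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and drops the rest. A stdlib-only sketch of the same check; the limit of three and the /etc/resolv.conf path are the conventional values and are stated here as assumptions:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    const maxNameservers = 3 // kubelet keeps at most this many entries

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }

        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: %d found, only %v would be used\n",
                len(servers), servers[:maxNameservers])
        } else {
            fmt.Printf("nameservers: %v\n", servers)
        }
    }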
Feb 13 19:18:36.245544 containerd[1436]: time="2025-02-13T19:18:36.245471756Z" level=info msg="shim disconnected" id=3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881 namespace=k8s.io Feb 13 19:18:36.245544 containerd[1436]: time="2025-02-13T19:18:36.245536378Z" level=warning msg="cleaning up after shim disconnected" id=3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881 namespace=k8s.io Feb 13 19:18:36.245544 containerd[1436]: time="2025-02-13T19:18:36.245546089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:36.969926 kubelet[1736]: E0213 19:18:36.969846 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:37.128442 kubelet[1736]: E0213 19:18:37.128400 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:37.130229 containerd[1436]: time="2025-02-13T19:18:37.130195326Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:18:37.144159 containerd[1436]: time="2025-02-13T19:18:37.144108604Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\"" Feb 13 19:18:37.144767 containerd[1436]: time="2025-02-13T19:18:37.144729719Z" level=info msg="StartContainer for \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\"" Feb 13 19:18:37.171928 systemd[1]: Started cri-containerd-b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8.scope - libcontainer container b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8. Feb 13 19:18:37.190150 systemd[1]: cri-containerd-b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8.scope: Deactivated successfully. Feb 13 19:18:37.191858 containerd[1436]: time="2025-02-13T19:18:37.191577033Z" level=info msg="StartContainer for \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\" returns successfully" Feb 13 19:18:37.210052 containerd[1436]: time="2025-02-13T19:18:37.209996342Z" level=info msg="shim disconnected" id=b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8 namespace=k8s.io Feb 13 19:18:37.210052 containerd[1436]: time="2025-02-13T19:18:37.210048338Z" level=warning msg="cleaning up after shim disconnected" id=b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8 namespace=k8s.io Feb 13 19:18:37.210052 containerd[1436]: time="2025-02-13T19:18:37.210056930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:37.235918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8-rootfs.mount: Deactivated successfully. 
Feb 13 19:18:37.970067 kubelet[1736]: E0213 19:18:37.970035 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:38.131848 kubelet[1736]: E0213 19:18:38.131817 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:38.133603 containerd[1436]: time="2025-02-13T19:18:38.133570256Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:18:38.149813 containerd[1436]: time="2025-02-13T19:18:38.149770497Z" level=info msg="CreateContainer within sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\"" Feb 13 19:18:38.150271 containerd[1436]: time="2025-02-13T19:18:38.150219501Z" level=info msg="StartContainer for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\"" Feb 13 19:18:38.176930 systemd[1]: Started cri-containerd-3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8.scope - libcontainer container 3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8. Feb 13 19:18:38.206433 containerd[1436]: time="2025-02-13T19:18:38.206368521Z" level=info msg="StartContainer for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" returns successfully" Feb 13 19:18:38.343150 kubelet[1736]: I0213 19:18:38.343041 1736 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:18:38.725775 kernel: Initializing XFRM netlink socket Feb 13 19:18:38.971173 kubelet[1736]: E0213 19:18:38.971121 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:39.136195 kubelet[1736]: E0213 19:18:39.136151 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:39.151999 kubelet[1736]: I0213 19:18:39.151940 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2fj2l" podStartSLOduration=6.950239428 podStartE2EDuration="18.151922694s" podCreationTimestamp="2025-02-13 19:18:21 +0000 UTC" firstStartedPulling="2025-02-13 19:18:23.025959276 +0000 UTC m=+2.631918935" lastFinishedPulling="2025-02-13 19:18:34.227642582 +0000 UTC m=+13.833602201" observedRunningTime="2025-02-13 19:18:39.151512838 +0000 UTC m=+18.757472497" watchObservedRunningTime="2025-02-13 19:18:39.151922694 +0000 UTC m=+18.757882353" Feb 13 19:18:39.972063 kubelet[1736]: E0213 19:18:39.971988 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:40.137491 kubelet[1736]: E0213 19:18:40.137458 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:40.345654 systemd-networkd[1372]: cilium_host: Link UP Feb 13 19:18:40.345800 systemd-networkd[1372]: cilium_net: Link UP Feb 13 19:18:40.346700 systemd-networkd[1372]: cilium_net: Gained carrier Feb 13 19:18:40.346952 systemd-networkd[1372]: cilium_host: Gained carrier 
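With the cilium-agent container running, the node reports Ready and systemd-networkd sees the agent create its host-side interfaces (cilium_host and cilium_net above, with cilium_vxlan and the lxc_health endpoint shortly after). A small sketch that reads the same link state via netlink; it relies on the third-party github.com/vishvananda/netlink package, which is an assumption of this example rather than anything the node itself runs:

    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan"} {
            link, err := netlink.LinkByName(name)
            if err != nil {
                log.Printf("%s: %v", name, err)
                continue
            }
            attrs := link.Attrs()
            fmt.Printf("%s: oper=%s mtu=%d index=%d\n",
                name, attrs.OperState, attrs.MTU, attrs.Index)
        }
    }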
Feb 13 19:18:40.347059 systemd-networkd[1372]: cilium_net: Gained IPv6LL Feb 13 19:18:40.347183 systemd-networkd[1372]: cilium_host: Gained IPv6LL Feb 13 19:18:40.426556 systemd-networkd[1372]: cilium_vxlan: Link UP Feb 13 19:18:40.426567 systemd-networkd[1372]: cilium_vxlan: Gained carrier Feb 13 19:18:40.720776 kernel: NET: Registered PF_ALG protocol family Feb 13 19:18:40.959922 kubelet[1736]: E0213 19:18:40.959885 1736 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:40.972900 kubelet[1736]: E0213 19:18:40.972855 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:41.139234 kubelet[1736]: E0213 19:18:41.139206 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:41.261925 systemd-networkd[1372]: lxc_health: Link UP Feb 13 19:18:41.272434 systemd-networkd[1372]: lxc_health: Gained carrier Feb 13 19:18:41.973334 kubelet[1736]: E0213 19:18:41.973279 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:42.074829 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Feb 13 19:18:42.140374 kubelet[1736]: E0213 19:18:42.140332 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:42.640662 systemd[1]: Created slice kubepods-besteffort-pod0656cca6_b9c1_4f1c_8edb_dd891cbb496c.slice - libcontainer container kubepods-besteffort-pod0656cca6_b9c1_4f1c_8edb_dd891cbb496c.slice. 
Feb 13 19:18:42.718047 kubelet[1736]: I0213 19:18:42.718004 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtfpn\" (UniqueName: \"kubernetes.io/projected/0656cca6-b9c1-4f1c-8edb-dd891cbb496c-kube-api-access-gtfpn\") pod \"nginx-deployment-8587fbcb89-p477b\" (UID: \"0656cca6-b9c1-4f1c-8edb-dd891cbb496c\") " pod="default/nginx-deployment-8587fbcb89-p477b" Feb 13 19:18:42.944702 containerd[1436]: time="2025-02-13T19:18:42.944299135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-p477b,Uid:0656cca6-b9c1-4f1c-8edb-dd891cbb496c,Namespace:default,Attempt:0,}" Feb 13 19:18:42.975426 kubelet[1736]: E0213 19:18:42.975373 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:43.001089 systemd-networkd[1372]: lxcc588c02cbfb1: Link UP Feb 13 19:18:43.007810 kernel: eth0: renamed from tmpde936 Feb 13 19:18:43.014232 systemd-networkd[1372]: lxcc588c02cbfb1: Gained carrier Feb 13 19:18:43.141820 kubelet[1736]: E0213 19:18:43.141779 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:43.163229 systemd-networkd[1372]: lxc_health: Gained IPv6LL Feb 13 19:18:43.976165 kubelet[1736]: E0213 19:18:43.976114 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:44.143016 kubelet[1736]: E0213 19:18:44.142956 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:18:44.188156 systemd-networkd[1372]: lxcc588c02cbfb1: Gained IPv6LL Feb 13 19:18:44.976884 kubelet[1736]: E0213 19:18:44.976800 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:45.756314 containerd[1436]: time="2025-02-13T19:18:45.756219798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:45.756314 containerd[1436]: time="2025-02-13T19:18:45.756279368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:45.756314 containerd[1436]: time="2025-02-13T19:18:45.756297239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:45.756803 containerd[1436]: time="2025-02-13T19:18:45.756444245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:45.783972 systemd[1]: Started cri-containerd-de936a77410681d51d7a37d2849190ff0bd53888737a1a15ed900873ecd3b641.scope - libcontainer container de936a77410681d51d7a37d2849190ff0bd53888737a1a15ed900873ecd3b641. 
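Once the nginx Deployment's pod lands on this node, kubelet attaches its projected service-account volume, asks containerd to run the pod sandbox, and Cilium wires up the lxcc588c02cbfb1 veth pair for it. To look at the same pod from the API server's side, a minimal client-go sketch; the kubeconfig path below is purely illustrative (any valid kubeconfig, or in-cluster config, would do):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig at this path; adjust for the environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pod, err := cs.CoreV1().Pods("default").Get(context.Background(),
            "nginx-deployment-8587fbcb89-p477b", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("phase=%s podIP=%s node=%s\n",
            pod.Status.Phase, pod.Status.PodIP, pod.Spec.NodeName)
        for _, cstat := range pod.Status.ContainerStatuses {
            fmt.Printf("  %s ready=%v image=%s\n", cstat.Name, cstat.Ready, cstat.Image)
        }
    }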
Feb 13 19:18:45.794750 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:45.810078 containerd[1436]: time="2025-02-13T19:18:45.809961656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-p477b,Uid:0656cca6-b9c1-4f1c-8edb-dd891cbb496c,Namespace:default,Attempt:0,} returns sandbox id \"de936a77410681d51d7a37d2849190ff0bd53888737a1a15ed900873ecd3b641\"" Feb 13 19:18:45.812117 containerd[1436]: time="2025-02-13T19:18:45.811914711Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:18:45.977548 kubelet[1736]: E0213 19:18:45.977494 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:46.978305 kubelet[1736]: E0213 19:18:46.978253 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:47.646007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576767073.mount: Deactivated successfully. Feb 13 19:18:47.978597 kubelet[1736]: E0213 19:18:47.978475 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:48.452271 containerd[1436]: time="2025-02-13T19:18:48.452167591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:48.453236 containerd[1436]: time="2025-02-13T19:18:48.453188447Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:18:48.454859 containerd[1436]: time="2025-02-13T19:18:48.454825766Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:48.457819 containerd[1436]: time="2025-02-13T19:18:48.457769183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:48.458918 containerd[1436]: time="2025-02-13T19:18:48.458890357Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.646939385s" Feb 13 19:18:48.459136 containerd[1436]: time="2025-02-13T19:18:48.459032778Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:18:48.461300 containerd[1436]: time="2025-02-13T19:18:48.461151818Z" level=info msg="CreateContainer within sandbox \"de936a77410681d51d7a37d2849190ff0bd53888737a1a15ed900873ecd3b641\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:18:48.481739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275067639.mount: Deactivated successfully. 
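The sandbox id comes back and kubelet pulls ghcr.io/flatcar/nginx:latest, which completes in about 2.6 s for roughly 69 MB before the nginx container is created from it. The equivalent pull can be issued straight through the containerd client; a sketch under the same socket and namespace assumptions as the earlier snippet:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack the image into the default snapshotter.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, err := img.Size(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }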
Feb 13 19:18:48.483828 containerd[1436]: time="2025-02-13T19:18:48.483786453Z" level=info msg="CreateContainer within sandbox \"de936a77410681d51d7a37d2849190ff0bd53888737a1a15ed900873ecd3b641\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1d603bef1c75ed39c81ddaabdd31a6ac6077d9f32bdd2da21f4ce402c241950b\"" Feb 13 19:18:48.484681 containerd[1436]: time="2025-02-13T19:18:48.484651574Z" level=info msg="StartContainer for \"1d603bef1c75ed39c81ddaabdd31a6ac6077d9f32bdd2da21f4ce402c241950b\"" Feb 13 19:18:48.516946 systemd[1]: Started cri-containerd-1d603bef1c75ed39c81ddaabdd31a6ac6077d9f32bdd2da21f4ce402c241950b.scope - libcontainer container 1d603bef1c75ed39c81ddaabdd31a6ac6077d9f32bdd2da21f4ce402c241950b. Feb 13 19:18:48.546608 containerd[1436]: time="2025-02-13T19:18:48.546555733Z" level=info msg="StartContainer for \"1d603bef1c75ed39c81ddaabdd31a6ac6077d9f32bdd2da21f4ce402c241950b\" returns successfully" Feb 13 19:18:48.978974 kubelet[1736]: E0213 19:18:48.978926 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:49.161389 kubelet[1736]: I0213 19:18:49.161324 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-p477b" podStartSLOduration=4.513078185 podStartE2EDuration="7.161310568s" podCreationTimestamp="2025-02-13 19:18:42 +0000 UTC" firstStartedPulling="2025-02-13 19:18:45.811671953 +0000 UTC m=+25.417631612" lastFinishedPulling="2025-02-13 19:18:48.459904336 +0000 UTC m=+28.065863995" observedRunningTime="2025-02-13 19:18:49.161147832 +0000 UTC m=+28.767107491" watchObservedRunningTime="2025-02-13 19:18:49.161310568 +0000 UTC m=+28.767270187" Feb 13 19:18:49.980071 kubelet[1736]: E0213 19:18:49.980017 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:50.980207 kubelet[1736]: E0213 19:18:50.980172 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:51.981134 kubelet[1736]: E0213 19:18:51.981001 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:52.981823 kubelet[1736]: E0213 19:18:52.981776 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:53.982326 kubelet[1736]: E0213 19:18:53.982270 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:54.919458 systemd[1]: Created slice kubepods-besteffort-pod1628e4d9_7f60_4fed_b729_54ebd7e053c6.slice - libcontainer container kubepods-besteffort-pod1628e4d9_7f60_4fed_b729_54ebd7e053c6.slice. 
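The pod_startup_latency_tracker entry is plain timestamp arithmetic: podStartE2EDuration is the gap between podCreationTimestamp and the observed running time, and podStartSLOduration appears to be that same gap minus the image-pull window, which reproduces the 7.161310568s and 4.513078185s figures above exactly. A short sketch re-deriving both from the timestamps printed in that entry:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func mustParse(v string) time.Time {
        // Layout matching the timestamps kubelet prints in the tracker entry above.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
        if err != nil {
            log.Fatal(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-02-13 19:18:42 +0000 UTC")
        firstPull := mustParse("2025-02-13 19:18:45.811671953 +0000 UTC")
        lastPull := mustParse("2025-02-13 19:18:48.459904336 +0000 UTC")
        running := mustParse("2025-02-13 19:18:49.161310568 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

        fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
        // Prints 7.161310568s and 4.513078185s, matching the tracker entry.
    }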
Feb 13 19:18:54.983251 kubelet[1736]: E0213 19:18:54.983202 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:54.999441 kubelet[1736]: I0213 19:18:54.999385 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1628e4d9-7f60-4fed-b729-54ebd7e053c6-data\") pod \"nfs-server-provisioner-0\" (UID: \"1628e4d9-7f60-4fed-b729-54ebd7e053c6\") " pod="default/nfs-server-provisioner-0" Feb 13 19:18:54.999441 kubelet[1736]: I0213 19:18:54.999437 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qwd\" (UniqueName: \"kubernetes.io/projected/1628e4d9-7f60-4fed-b729-54ebd7e053c6-kube-api-access-n9qwd\") pod \"nfs-server-provisioner-0\" (UID: \"1628e4d9-7f60-4fed-b729-54ebd7e053c6\") " pod="default/nfs-server-provisioner-0" Feb 13 19:18:55.222925 containerd[1436]: time="2025-02-13T19:18:55.222804856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1628e4d9-7f60-4fed-b729-54ebd7e053c6,Namespace:default,Attempt:0,}" Feb 13 19:18:55.248988 systemd-networkd[1372]: lxc637fedf98b12: Link UP Feb 13 19:18:55.259140 kernel: eth0: renamed from tmp6833e Feb 13 19:18:55.265424 systemd-networkd[1372]: lxc637fedf98b12: Gained carrier Feb 13 19:18:55.414433 containerd[1436]: time="2025-02-13T19:18:55.414267109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:55.414433 containerd[1436]: time="2025-02-13T19:18:55.414319216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:55.414433 containerd[1436]: time="2025-02-13T19:18:55.414329493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:55.414632 containerd[1436]: time="2025-02-13T19:18:55.414415430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:55.436975 systemd[1]: Started cri-containerd-6833ebcca9220c3b9dd257769c6ebe6250f731ac5d8d4e1f864a89cfcaf7f491.scope - libcontainer container 6833ebcca9220c3b9dd257769c6ebe6250f731ac5d8d4e1f864a89cfcaf7f491. Feb 13 19:18:55.448523 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:55.466245 containerd[1436]: time="2025-02-13T19:18:55.466209175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1628e4d9-7f60-4fed-b729-54ebd7e053c6,Namespace:default,Attempt:0,} returns sandbox id \"6833ebcca9220c3b9dd257769c6ebe6250f731ac5d8d4e1f864a89cfcaf7f491\"" Feb 13 19:18:55.467973 containerd[1436]: time="2025-02-13T19:18:55.467838784Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:18:55.984118 kubelet[1736]: E0213 19:18:55.984075 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:56.974098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328848991.mount: Deactivated successfully. 
Feb 13 19:18:56.984249 kubelet[1736]: E0213 19:18:56.984214 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:57.115012 systemd-networkd[1372]: lxc637fedf98b12: Gained IPv6LL Feb 13 19:18:57.115906 update_engine[1420]: I20250213 19:18:57.115772 1420 update_attempter.cc:509] Updating boot flags... Feb 13 19:18:57.139128 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2982) Feb 13 19:18:57.180778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2982) Feb 13 19:18:57.985224 kubelet[1736]: E0213 19:18:57.985181 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:58.440996 containerd[1436]: time="2025-02-13T19:18:58.440857521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:58.441562 containerd[1436]: time="2025-02-13T19:18:58.441515418Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Feb 13 19:18:58.442598 containerd[1436]: time="2025-02-13T19:18:58.442552072Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:58.446504 containerd[1436]: time="2025-02-13T19:18:58.445287116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:58.446504 containerd[1436]: time="2025-02-13T19:18:58.446370720Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.978491707s" Feb 13 19:18:58.446504 containerd[1436]: time="2025-02-13T19:18:58.446400754Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:18:58.458117 containerd[1436]: time="2025-02-13T19:18:58.458077130Z" level=info msg="CreateContainer within sandbox \"6833ebcca9220c3b9dd257769c6ebe6250f731ac5d8d4e1f864a89cfcaf7f491\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:18:58.468856 containerd[1436]: time="2025-02-13T19:18:58.468812791Z" level=info msg="CreateContainer within sandbox \"6833ebcca9220c3b9dd257769c6ebe6250f731ac5d8d4e1f864a89cfcaf7f491\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e7a90fd6e68512056359e9644e585d0058808e2169e1eda00cb79b7113f564bb\"" Feb 13 19:18:58.469298 containerd[1436]: time="2025-02-13T19:18:58.469263613Z" level=info msg="StartContainer for \"e7a90fd6e68512056359e9644e585d0058808e2169e1eda00cb79b7113f564bb\"" Feb 13 19:18:58.547933 systemd[1]: Started cri-containerd-e7a90fd6e68512056359e9644e585d0058808e2169e1eda00cb79b7113f564bb.scope - libcontainer container 
e7a90fd6e68512056359e9644e585d0058808e2169e1eda00cb79b7113f564bb. Feb 13 19:18:58.588947 containerd[1436]: time="2025-02-13T19:18:58.588899908Z" level=info msg="StartContainer for \"e7a90fd6e68512056359e9644e585d0058808e2169e1eda00cb79b7113f564bb\" returns successfully" Feb 13 19:18:58.986172 kubelet[1736]: E0213 19:18:58.986097 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:59.187564 kubelet[1736]: I0213 19:18:59.187498 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.19836181 podStartE2EDuration="5.187482865s" podCreationTimestamp="2025-02-13 19:18:54 +0000 UTC" firstStartedPulling="2025-02-13 19:18:55.467522867 +0000 UTC m=+35.073482526" lastFinishedPulling="2025-02-13 19:18:58.456643922 +0000 UTC m=+38.062603581" observedRunningTime="2025-02-13 19:18:59.185852318 +0000 UTC m=+38.791811937" watchObservedRunningTime="2025-02-13 19:18:59.187482865 +0000 UTC m=+38.793442524" Feb 13 19:18:59.986842 kubelet[1736]: E0213 19:18:59.986789 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:00.959992 kubelet[1736]: E0213 19:19:00.959941 1736 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:00.987461 kubelet[1736]: E0213 19:19:00.987430 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:01.988322 kubelet[1736]: E0213 19:19:01.988271 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:02.988971 kubelet[1736]: E0213 19:19:02.988920 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:03.989535 kubelet[1736]: E0213 19:19:03.989482 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:04.989707 kubelet[1736]: E0213 19:19:04.989660 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:05.990050 kubelet[1736]: E0213 19:19:05.989998 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:06.990385 kubelet[1736]: E0213 19:19:06.990317 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:07.991166 kubelet[1736]: E0213 19:19:07.991115 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:08.565444 systemd[1]: Created slice kubepods-besteffort-pod06e219fe_e173_4061_8531_74f1f06a11ee.slice - libcontainer container kubepods-besteffort-pod06e219fe_e173_4061_8531_74f1f06a11ee.slice. 
Feb 13 19:19:08.678804 kubelet[1736]: I0213 19:19:08.678733 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8f239190-8633-4844-9b85-978d536a302b\" (UniqueName: \"kubernetes.io/nfs/06e219fe-e173-4061-8531-74f1f06a11ee-pvc-8f239190-8633-4844-9b85-978d536a302b\") pod \"test-pod-1\" (UID: \"06e219fe-e173-4061-8531-74f1f06a11ee\") " pod="default/test-pod-1" Feb 13 19:19:08.678804 kubelet[1736]: I0213 19:19:08.678802 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqw6f\" (UniqueName: \"kubernetes.io/projected/06e219fe-e173-4061-8531-74f1f06a11ee-kube-api-access-lqw6f\") pod \"test-pod-1\" (UID: \"06e219fe-e173-4061-8531-74f1f06a11ee\") " pod="default/test-pod-1" Feb 13 19:19:08.802779 kernel: FS-Cache: Loaded Feb 13 19:19:08.827166 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:19:08.827367 kernel: RPC: Registered udp transport module. Feb 13 19:19:08.827391 kernel: RPC: Registered tcp transport module. Feb 13 19:19:08.827409 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:19:08.827794 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:19:08.991555 kubelet[1736]: E0213 19:19:08.991496 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:09.009897 kernel: NFS: Registering the id_resolver key type Feb 13 19:19:09.009976 kernel: Key type id_resolver registered Feb 13 19:19:09.009999 kernel: Key type id_legacy registered Feb 13 19:19:09.034690 nfsidmap[3154]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:19:09.038532 nfsidmap[3157]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:19:09.169111 containerd[1436]: time="2025-02-13T19:19:09.168813644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:06e219fe-e173-4061-8531-74f1f06a11ee,Namespace:default,Attempt:0,}" Feb 13 19:19:09.194497 systemd-networkd[1372]: lxc23e53edb30ae: Link UP Feb 13 19:19:09.202771 kernel: eth0: renamed from tmpbd5f0 Feb 13 19:19:09.212144 systemd-networkd[1372]: lxc23e53edb30ae: Gained carrier Feb 13 19:19:09.386087 containerd[1436]: time="2025-02-13T19:19:09.386000021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:09.386087 containerd[1436]: time="2025-02-13T19:19:09.386062895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:09.386087 containerd[1436]: time="2025-02-13T19:19:09.386075253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:09.386259 containerd[1436]: time="2025-02-13T19:19:09.386152125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:09.423899 systemd[1]: Started cri-containerd-bd5f0e259dbebc8144d1ad321ef1a83c63e8f186dae161c62bf1c1dc0c0acb52.scope - libcontainer container bd5f0e259dbebc8144d1ad321ef1a83c63e8f186dae161c62bf1c1dc0c0acb52. 
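The kernel loads the NFS client pieces here (FS-Cache, the RPC transports, the id_resolver/id_legacy key types) so the test pod's NFS-backed PVC can be mounted, and nfsidmap then warns that root@nfs-server-provisioner.default.svc.cluster.local does not map into the local NFSv4 domain 'localdomain', so ID mapping falls back to default ownership. The domain normally comes from the Domain key in /etc/idmapd.conf, or is derived from the host's DNS domain when unset; both the path and that fallback behaviour are stated here as conventional assumptions. A small stdlib sketch that reports the configured value:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        // Assumption: the conventional idmapd configuration path.
        f, err := os.Open("/etc/idmapd.conf")
        if err != nil {
            log.Printf("no idmapd.conf readable (%v); the mapper falls back to the host DNS domain", err)
            return
        }
        defer f.Close()

        domain := ""
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "Domain") {
                if _, after, ok := strings.Cut(line, "="); ok {
                    domain = strings.TrimSpace(after)
                }
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        if domain == "" {
            fmt.Println("no Domain set; the mapper derives it from the host DNS domain (here: localdomain)")
            return
        }
        fmt.Println("configured NFSv4 idmap domain:", domain)
    }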
Feb 13 19:19:09.433320 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:19:09.448909 containerd[1436]: time="2025-02-13T19:19:09.448862968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:06e219fe-e173-4061-8531-74f1f06a11ee,Namespace:default,Attempt:0,} returns sandbox id \"bd5f0e259dbebc8144d1ad321ef1a83c63e8f186dae161c62bf1c1dc0c0acb52\"" Feb 13 19:19:09.450327 containerd[1436]: time="2025-02-13T19:19:09.450300174Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:19:09.740094 containerd[1436]: time="2025-02-13T19:19:09.740047060Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:09.740606 containerd[1436]: time="2025-02-13T19:19:09.740564724Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:19:09.743765 containerd[1436]: time="2025-02-13T19:19:09.743719386Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 293.390335ms" Feb 13 19:19:09.743887 containerd[1436]: time="2025-02-13T19:19:09.743776460Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:19:09.745731 containerd[1436]: time="2025-02-13T19:19:09.745625062Z" level=info msg="CreateContainer within sandbox \"bd5f0e259dbebc8144d1ad321ef1a83c63e8f186dae161c62bf1c1dc0c0acb52\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:19:09.755204 containerd[1436]: time="2025-02-13T19:19:09.755165440Z" level=info msg="CreateContainer within sandbox \"bd5f0e259dbebc8144d1ad321ef1a83c63e8f186dae161c62bf1c1dc0c0acb52\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5fcb474267529d9bcd080ad15cd1d3013edbf33bd7cabb2c182ca5fbc84d2b17\"" Feb 13 19:19:09.756682 containerd[1436]: time="2025-02-13T19:19:09.755840288Z" level=info msg="StartContainer for \"5fcb474267529d9bcd080ad15cd1d3013edbf33bd7cabb2c182ca5fbc84d2b17\"" Feb 13 19:19:09.794891 systemd[1]: Started cri-containerd-5fcb474267529d9bcd080ad15cd1d3013edbf33bd7cabb2c182ca5fbc84d2b17.scope - libcontainer container 5fcb474267529d9bcd080ad15cd1d3013edbf33bd7cabb2c182ca5fbc84d2b17. 
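The second pull of ghcr.io/flatcar/nginx:latest returns in about 293 ms with only 61 bytes read: the image content is already in containerd's store from the earlier pull, so essentially only the reference is re-resolved (hence an ImageUpdate event rather than ImageCreate). A sketch that checks for a locally present image before deciding to pull, under the same containerd-client assumptions as the earlier snippets:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "ghcr.io/flatcar/nginx:latest"

        img, err := client.GetImage(ctx, ref)
        switch {
        case err == nil:
            fmt.Println("already present:", img.Name())
        case errdefs.IsNotFound(err):
            fmt.Println("not present, a pull would fetch it:", ref)
        default:
            log.Fatal(err)
        }
    }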
Feb 13 19:19:09.815978 containerd[1436]: time="2025-02-13T19:19:09.815938331Z" level=info msg="StartContainer for \"5fcb474267529d9bcd080ad15cd1d3013edbf33bd7cabb2c182ca5fbc84d2b17\" returns successfully" Feb 13 19:19:09.991897 kubelet[1736]: E0213 19:19:09.991713 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:10.205895 kubelet[1736]: I0213 19:19:10.205832 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.911441831 podStartE2EDuration="15.20581582s" podCreationTimestamp="2025-02-13 19:18:55 +0000 UTC" firstStartedPulling="2025-02-13 19:19:09.450045202 +0000 UTC m=+49.056004861" lastFinishedPulling="2025-02-13 19:19:09.744419191 +0000 UTC m=+49.350378850" observedRunningTime="2025-02-13 19:19:10.205185723 +0000 UTC m=+49.811145342" watchObservedRunningTime="2025-02-13 19:19:10.20581582 +0000 UTC m=+49.811775479" Feb 13 19:19:10.991868 kubelet[1736]: E0213 19:19:10.991808 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:11.130871 systemd-networkd[1372]: lxc23e53edb30ae: Gained IPv6LL Feb 13 19:19:11.992234 kubelet[1736]: E0213 19:19:11.992174 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:12.993263 kubelet[1736]: E0213 19:19:12.993208 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:13.586726 containerd[1436]: time="2025-02-13T19:19:13.586652710Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:19:13.593927 containerd[1436]: time="2025-02-13T19:19:13.593889712Z" level=info msg="StopContainer for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" with timeout 2 (s)" Feb 13 19:19:13.594180 containerd[1436]: time="2025-02-13T19:19:13.594161769Z" level=info msg="Stop container \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" with signal terminated" Feb 13 19:19:13.599660 systemd-networkd[1372]: lxc_health: Link DOWN Feb 13 19:19:13.599666 systemd-networkd[1372]: lxc_health: Lost carrier Feb 13 19:19:13.657867 systemd[1]: cri-containerd-3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8.scope: Deactivated successfully. Feb 13 19:19:13.658136 systemd[1]: cri-containerd-3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8.scope: Consumed 6.453s CPU time. Feb 13 19:19:13.675899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8-rootfs.mount: Deactivated successfully. 
Feb 13 19:19:13.685524 containerd[1436]: time="2025-02-13T19:19:13.685449336Z" level=info msg="shim disconnected" id=3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8 namespace=k8s.io Feb 13 19:19:13.685524 containerd[1436]: time="2025-02-13T19:19:13.685519610Z" level=warning msg="cleaning up after shim disconnected" id=3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8 namespace=k8s.io Feb 13 19:19:13.685524 containerd[1436]: time="2025-02-13T19:19:13.685530330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:13.699708 containerd[1436]: time="2025-02-13T19:19:13.699663720Z" level=info msg="StopContainer for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" returns successfully" Feb 13 19:19:13.700518 containerd[1436]: time="2025-02-13T19:19:13.700480493Z" level=info msg="StopPodSandbox for \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\"" Feb 13 19:19:13.704016 containerd[1436]: time="2025-02-13T19:19:13.703969604Z" level=info msg="Container to stop \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.704016 containerd[1436]: time="2025-02-13T19:19:13.704010041Z" level=info msg="Container to stop \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.704102 containerd[1436]: time="2025-02-13T19:19:13.704020800Z" level=info msg="Container to stop \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.704102 containerd[1436]: time="2025-02-13T19:19:13.704030319Z" level=info msg="Container to stop \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.704102 containerd[1436]: time="2025-02-13T19:19:13.704047397Z" level=info msg="Container to stop \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:19:13.705540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8-shm.mount: Deactivated successfully. Feb 13 19:19:13.709443 systemd[1]: cri-containerd-0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8.scope: Deactivated successfully. Feb 13 19:19:13.728269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8-rootfs.mount: Deactivated successfully. 
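This is the CRI StopContainer/StopPodSandbox teardown: the cilium-agent task is signalled with SIGTERM under the 2-second timeout, its scope is deactivated, the shim and rootfs mounts are cleaned up, and the sandbox with all of its already-exited containers is then stopped. Driving the same stop-and-delete sequence directly against containerd, as a sketch (same socket and namespace assumptions; the container ID is taken from the log, and the escalation to SIGKILL mirrors what the CRI path does on timeout):

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        id := "3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8"

        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Ask for the exit status before signalling so the exit is not missed.
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }

        select {
        case status := <-exitCh:
            log.Printf("exited with code %d", status.ExitCode())
        case <-time.After(2 * time.Second):
            // Timeout reached: escalate, then wait for the exit.
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                log.Fatal(err)
            }
            <-exitCh
        }

        if _, err := task.Delete(ctx); err != nil {
            log.Fatal(err)
        }
        if err := container.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
            log.Fatal(err)
        }
    }

kubelet normally performs all of this through the CRI; the direct-client version is only useful when debugging a node by hand.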
Feb 13 19:19:13.733459 containerd[1436]: time="2025-02-13T19:19:13.733348253Z" level=info msg="shim disconnected" id=0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8 namespace=k8s.io Feb 13 19:19:13.733459 containerd[1436]: time="2025-02-13T19:19:13.733406528Z" level=warning msg="cleaning up after shim disconnected" id=0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8 namespace=k8s.io Feb 13 19:19:13.733459 containerd[1436]: time="2025-02-13T19:19:13.733422127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:13.744722 containerd[1436]: time="2025-02-13T19:19:13.744647038Z" level=info msg="TearDown network for sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" successfully" Feb 13 19:19:13.744722 containerd[1436]: time="2025-02-13T19:19:13.744715753Z" level=info msg="StopPodSandbox for \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" returns successfully" Feb 13 19:19:13.811006 kubelet[1736]: I0213 19:19:13.810960 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-run\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811006 kubelet[1736]: I0213 19:19:13.811006 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cni-path\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811202 kubelet[1736]: I0213 19:19:13.811036 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae29f47a-3dbc-44dc-b015-07e5e016033e-clustermesh-secrets\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811202 kubelet[1736]: I0213 19:19:13.811055 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-net\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811202 kubelet[1736]: I0213 19:19:13.811077 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-hubble-tls\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811202 kubelet[1736]: I0213 19:19:13.811092 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-etc-cni-netd\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811202 kubelet[1736]: I0213 19:19:13.811111 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-config-path\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811202 kubelet[1736]: I0213 19:19:13.811129 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zklfz\" (UniqueName: \"kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-kube-api-access-zklfz\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811331 kubelet[1736]: I0213 19:19:13.811145 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-lib-modules\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811331 kubelet[1736]: I0213 19:19:13.811159 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-kernel\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811331 kubelet[1736]: I0213 19:19:13.811173 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-cgroup\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811331 kubelet[1736]: I0213 19:19:13.811187 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-xtables-lock\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811331 kubelet[1736]: I0213 19:19:13.811202 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-hostproc\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811331 kubelet[1736]: I0213 19:19:13.811216 1736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-bpf-maps\") pod \"ae29f47a-3dbc-44dc-b015-07e5e016033e\" (UID: \"ae29f47a-3dbc-44dc-b015-07e5e016033e\") " Feb 13 19:19:13.811452 kubelet[1736]: I0213 19:19:13.811285 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.811452 kubelet[1736]: I0213 19:19:13.811322 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.811452 kubelet[1736]: I0213 19:19:13.811336 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812149 kubelet[1736]: I0213 19:19:13.811634 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812149 kubelet[1736]: I0213 19:19:13.811673 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812149 kubelet[1736]: I0213 19:19:13.811685 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812149 kubelet[1736]: I0213 19:19:13.811707 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812149 kubelet[1736]: I0213 19:19:13.811723 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812712 kubelet[1736]: I0213 19:19:13.812621 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.812712 kubelet[1736]: I0213 19:19:13.812689 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:19:13.813954 kubelet[1736]: I0213 19:19:13.813919 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:19:13.816394 kubelet[1736]: I0213 19:19:13.816327 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:19:13.817064 systemd[1]: var-lib-kubelet-pods-ae29f47a\x2d3dbc\x2d44dc\x2db015\x2d07e5e016033e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzklfz.mount: Deactivated successfully. Feb 13 19:19:13.817170 systemd[1]: var-lib-kubelet-pods-ae29f47a\x2d3dbc\x2d44dc\x2db015\x2d07e5e016033e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:19:13.818014 kubelet[1736]: I0213 19:19:13.817985 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-kube-api-access-zklfz" (OuterVolumeSpecName: "kube-api-access-zklfz") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "kube-api-access-zklfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:19:13.818157 kubelet[1736]: I0213 19:19:13.818132 1736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29f47a-3dbc-44dc-b015-07e5e016033e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae29f47a-3dbc-44dc-b015-07e5e016033e" (UID: "ae29f47a-3dbc-44dc-b015-07e5e016033e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911500 1736 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-hubble-tls\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911531 1736 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-etc-cni-netd\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911539 1736 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-lib-modules\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911549 1736 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-kernel\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911561 1736 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-cgroup\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911569 1736 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-config-path\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911576 1736 reconciler_common.go:288] "Volume detached for volume 
\"kube-api-access-zklfz\" (UniqueName: \"kubernetes.io/projected/ae29f47a-3dbc-44dc-b015-07e5e016033e-kube-api-access-zklfz\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911618 kubelet[1736]: I0213 19:19:13.911584 1736 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-xtables-lock\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911901 kubelet[1736]: I0213 19:19:13.911591 1736 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-hostproc\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911901 kubelet[1736]: I0213 19:19:13.911598 1736 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-bpf-maps\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911901 kubelet[1736]: I0213 19:19:13.911605 1736 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae29f47a-3dbc-44dc-b015-07e5e016033e-clustermesh-secrets\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911901 kubelet[1736]: I0213 19:19:13.911613 1736 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-host-proc-sys-net\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911901 kubelet[1736]: I0213 19:19:13.911620 1736 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cilium-run\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.911901 kubelet[1736]: I0213 19:19:13.911628 1736 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae29f47a-3dbc-44dc-b015-07e5e016033e-cni-path\") on node \"10.0.0.113\" DevicePath \"\"" Feb 13 19:19:13.994223 kubelet[1736]: E0213 19:19:13.994185 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:14.209915 kubelet[1736]: I0213 19:19:14.209688 1736 scope.go:117] "RemoveContainer" containerID="3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8" Feb 13 19:19:14.210943 containerd[1436]: time="2025-02-13T19:19:14.210906576Z" level=info msg="RemoveContainer for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\"" Feb 13 19:19:14.215809 systemd[1]: Removed slice kubepods-burstable-podae29f47a_3dbc_44dc_b015_07e5e016033e.slice - libcontainer container kubepods-burstable-podae29f47a_3dbc_44dc_b015_07e5e016033e.slice. Feb 13 19:19:14.215930 systemd[1]: kubepods-burstable-podae29f47a_3dbc_44dc_b015_07e5e016033e.slice: Consumed 6.582s CPU time. 
Feb 13 19:19:14.219099 containerd[1436]: time="2025-02-13T19:19:14.219056904Z" level=info msg="RemoveContainer for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" returns successfully" Feb 13 19:19:14.220245 kubelet[1736]: I0213 19:19:14.219399 1736 scope.go:117] "RemoveContainer" containerID="b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8" Feb 13 19:19:14.220518 containerd[1436]: time="2025-02-13T19:19:14.220481274Z" level=info msg="RemoveContainer for \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\"" Feb 13 19:19:14.223075 containerd[1436]: time="2025-02-13T19:19:14.223036795Z" level=info msg="RemoveContainer for \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\" returns successfully" Feb 13 19:19:14.223284 kubelet[1736]: I0213 19:19:14.223241 1736 scope.go:117] "RemoveContainer" containerID="3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881" Feb 13 19:19:14.227629 containerd[1436]: time="2025-02-13T19:19:14.227541526Z" level=info msg="RemoveContainer for \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\"" Feb 13 19:19:14.230524 containerd[1436]: time="2025-02-13T19:19:14.230468819Z" level=info msg="RemoveContainer for \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\" returns successfully" Feb 13 19:19:14.230786 kubelet[1736]: I0213 19:19:14.230630 1736 scope.go:117] "RemoveContainer" containerID="f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563" Feb 13 19:19:14.231572 containerd[1436]: time="2025-02-13T19:19:14.231546455Z" level=info msg="RemoveContainer for \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\"" Feb 13 19:19:14.246907 containerd[1436]: time="2025-02-13T19:19:14.246863987Z" level=info msg="RemoveContainer for \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\" returns successfully" Feb 13 19:19:14.247197 kubelet[1736]: I0213 19:19:14.247114 1736 scope.go:117] "RemoveContainer" containerID="d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c" Feb 13 19:19:14.248117 containerd[1436]: time="2025-02-13T19:19:14.248063654Z" level=info msg="RemoveContainer for \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\"" Feb 13 19:19:14.250261 containerd[1436]: time="2025-02-13T19:19:14.250221847Z" level=info msg="RemoveContainer for \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\" returns successfully" Feb 13 19:19:14.250506 kubelet[1736]: I0213 19:19:14.250385 1736 scope.go:117] "RemoveContainer" containerID="3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8" Feb 13 19:19:14.250629 containerd[1436]: time="2025-02-13T19:19:14.250580259Z" level=error msg="ContainerStatus for \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\": not found" Feb 13 19:19:14.250929 kubelet[1736]: E0213 19:19:14.250752 1736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\": not found" containerID="3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8" Feb 13 19:19:14.250929 kubelet[1736]: I0213 19:19:14.250786 1736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8"} err="failed to get container status \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8\": not found" Feb 13 19:19:14.250929 kubelet[1736]: I0213 19:19:14.250859 1736 scope.go:117] "RemoveContainer" containerID="b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8" Feb 13 19:19:14.251047 containerd[1436]: time="2025-02-13T19:19:14.250986507Z" level=error msg="ContainerStatus for \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\": not found" Feb 13 19:19:14.251231 kubelet[1736]: E0213 19:19:14.251148 1736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\": not found" containerID="b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8" Feb 13 19:19:14.251231 kubelet[1736]: I0213 19:19:14.251174 1736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8"} err="failed to get container status \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0fc6ce00beef414f4da1d328c30ad665b32b7a29b45327105b8b8d02a7e1ee8\": not found" Feb 13 19:19:14.251231 kubelet[1736]: I0213 19:19:14.251188 1736 scope.go:117] "RemoveContainer" containerID="3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881" Feb 13 19:19:14.251402 containerd[1436]: time="2025-02-13T19:19:14.251328761Z" level=error msg="ContainerStatus for \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\": not found" Feb 13 19:19:14.251486 kubelet[1736]: E0213 19:19:14.251460 1736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\": not found" containerID="3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881" Feb 13 19:19:14.251529 kubelet[1736]: I0213 19:19:14.251491 1736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881"} err="failed to get container status \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cfaf14d7854be08a9f7e60118385f57506d7620d50e614b9f2e9c4b61dc3881\": not found" Feb 13 19:19:14.251529 kubelet[1736]: I0213 19:19:14.251508 1736 scope.go:117] "RemoveContainer" containerID="f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563" Feb 13 19:19:14.251730 containerd[1436]: time="2025-02-13T19:19:14.251640857Z" level=error msg="ContainerStatus for \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\": not found" Feb 13 19:19:14.251799 kubelet[1736]: E0213 19:19:14.251770 1736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\": not found" containerID="f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563" Feb 13 19:19:14.251831 kubelet[1736]: I0213 19:19:14.251796 1736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563"} err="failed to get container status \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6da33b385d0918e162dc07c0af97876b757b189c801d18fd4077e2610a0d563\": not found" Feb 13 19:19:14.251831 kubelet[1736]: I0213 19:19:14.251818 1736 scope.go:117] "RemoveContainer" containerID="d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c" Feb 13 19:19:14.252407 containerd[1436]: time="2025-02-13T19:19:14.252005908Z" level=error msg="ContainerStatus for \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\": not found" Feb 13 19:19:14.252474 kubelet[1736]: E0213 19:19:14.252129 1736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\": not found" containerID="d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c" Feb 13 19:19:14.252474 kubelet[1736]: I0213 19:19:14.252150 1736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c"} err="failed to get container status \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8eb1ab5f999b68fc5b94e6ef232cdf9295a61ab69907f31404c2c085711be2c\": not found" Feb 13 19:19:14.572457 systemd[1]: var-lib-kubelet-pods-ae29f47a\x2d3dbc\x2d44dc\x2db015\x2d07e5e016033e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
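The DeleteContainer/ContainerStatus exchange above shows the kubelet asking the runtime about container IDs it has just removed and treating the gRPC NotFound answer as "already gone". A hedged sketch of the same query against containerd's CRI socket follows; the socket path is the usual default, the container ID is copied from the log, and the client usage is illustrative rather than the kubelet's implementation.

// Hedged sketch: call the CRI ContainerStatus RPC for a removed container and
// detect the gRPC NotFound code, mirroring the log entries above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// containerd serves the CRI RuntimeService on its main socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// ID of a container that was already removed (taken from the log above).
	id := "3b8f57ab5cf2ea2c6021cffa58a272111a42897344b24b946dc7c78fc3f504a8"
	_, err = client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		// The kubelet treats this as "already deleted" and moves on, which is
		// what the DeleteContainer entries above record.
		fmt.Println("container already gone:", err)
		return
	}
	fmt.Println("status call result:", err)
}

On the node itself, `crictl inspect <container-id>` should surface the same NotFound error for an already-removed container.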
Feb 13 19:19:14.995215 kubelet[1736]: E0213 19:19:14.995173 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:15.096486 kubelet[1736]: I0213 19:19:15.096418 1736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" path="/var/lib/kubelet/pods/ae29f47a-3dbc-44dc-b015-07e5e016033e/volumes" Feb 13 19:19:15.995485 kubelet[1736]: E0213 19:19:15.995440 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:16.111775 kubelet[1736]: E0213 19:19:16.111729 1736 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:19:16.891198 kubelet[1736]: E0213 19:19:16.890978 1736 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" containerName="mount-bpf-fs" Feb 13 19:19:16.891198 kubelet[1736]: E0213 19:19:16.891009 1736 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" containerName="clean-cilium-state" Feb 13 19:19:16.891198 kubelet[1736]: E0213 19:19:16.891015 1736 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" containerName="apply-sysctl-overwrites" Feb 13 19:19:16.891198 kubelet[1736]: E0213 19:19:16.891022 1736 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" containerName="mount-cgroup" Feb 13 19:19:16.891198 kubelet[1736]: E0213 19:19:16.891027 1736 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" containerName="cilium-agent" Feb 13 19:19:16.891198 kubelet[1736]: I0213 19:19:16.891047 1736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae29f47a-3dbc-44dc-b015-07e5e016033e" containerName="cilium-agent" Feb 13 19:19:16.896618 systemd[1]: Created slice kubepods-burstable-pod40cc7398_be49_4bc0_a90c_533953b1d3bd.slice - libcontainer container kubepods-burstable-pod40cc7398_be49_4bc0_a90c_533953b1d3bd.slice. Feb 13 19:19:16.916393 systemd[1]: Created slice kubepods-besteffort-pod67e4a7ed_0b66_40db_a070_4403f394cd31.slice - libcontainer container kubepods-besteffort-pod67e4a7ed_0b66_40db_a070_4403f394cd31.slice. 
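The two "Created slice" entries above show the systemd cgroup driver's naming scheme: the pod's QoS class plus its UID with dashes mapped to underscores. A tiny illustrative Go helper reproducing that pattern (not the kubelet's actual implementation) is below.

// Small sketch of the slice naming visible in the systemd entries above.
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the observed pattern; purely illustrative.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "40cc7398-be49-4bc0-a90c-533953b1d3bd"))
	// kubepods-burstable-pod40cc7398_be49_4bc0_a90c_533953b1d3bd.slice
	fmt.Println(podSliceName("besteffort", "67e4a7ed-0b66-40db-a070-4403f394cd31"))
	// kubepods-besteffort-pod67e4a7ed_0b66_40db_a070_4403f394cd31.slice
}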
Feb 13 19:19:16.928229 kubelet[1736]: I0213 19:19:16.928123 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-cilium-run\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928229 kubelet[1736]: I0213 19:19:16.928162 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-etc-cni-netd\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928229 kubelet[1736]: I0213 19:19:16.928189 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40cc7398-be49-4bc0-a90c-533953b1d3bd-cilium-ipsec-secrets\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928229 kubelet[1736]: I0213 19:19:16.928211 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40cc7398-be49-4bc0-a90c-533953b1d3bd-cilium-config-path\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928229 kubelet[1736]: I0213 19:19:16.928229 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-host-proc-sys-kernel\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928430 kubelet[1736]: I0213 19:19:16.928244 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67e4a7ed-0b66-40db-a070-4403f394cd31-cilium-config-path\") pod \"cilium-operator-5d85765b45-vkxb2\" (UID: \"67e4a7ed-0b66-40db-a070-4403f394cd31\") " pod="kube-system/cilium-operator-5d85765b45-vkxb2" Feb 13 19:19:16.928430 kubelet[1736]: I0213 19:19:16.928261 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-cni-path\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928430 kubelet[1736]: I0213 19:19:16.928276 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-lib-modules\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928430 kubelet[1736]: I0213 19:19:16.928291 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7kr\" (UniqueName: \"kubernetes.io/projected/67e4a7ed-0b66-40db-a070-4403f394cd31-kube-api-access-zw7kr\") pod \"cilium-operator-5d85765b45-vkxb2\" (UID: \"67e4a7ed-0b66-40db-a070-4403f394cd31\") " pod="kube-system/cilium-operator-5d85765b45-vkxb2" Feb 13 19:19:16.928430 kubelet[1736]: I0213 19:19:16.928306 1736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-host-proc-sys-net\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928535 kubelet[1736]: I0213 19:19:16.928322 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40cc7398-be49-4bc0-a90c-533953b1d3bd-hubble-tls\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928535 kubelet[1736]: I0213 19:19:16.928335 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-hostproc\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928535 kubelet[1736]: I0213 19:19:16.928350 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-cilium-cgroup\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928535 kubelet[1736]: I0213 19:19:16.928365 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-xtables-lock\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928535 kubelet[1736]: I0213 19:19:16.928385 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40cc7398-be49-4bc0-a90c-533953b1d3bd-clustermesh-secrets\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928535 kubelet[1736]: I0213 19:19:16.928403 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40cc7398-be49-4bc0-a90c-533953b1d3bd-bpf-maps\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.928676 kubelet[1736]: I0213 19:19:16.928420 1736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpn9\" (UniqueName: \"kubernetes.io/projected/40cc7398-be49-4bc0-a90c-533953b1d3bd-kube-api-access-5hpn9\") pod \"cilium-9k8dm\" (UID: \"40cc7398-be49-4bc0-a90c-533953b1d3bd\") " pod="kube-system/cilium-9k8dm" Feb 13 19:19:16.995904 kubelet[1736]: E0213 19:19:16.995847 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:17.214328 kubelet[1736]: E0213 19:19:17.213986 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:17.214419 containerd[1436]: time="2025-02-13T19:19:17.214376101Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-9k8dm,Uid:40cc7398-be49-4bc0-a90c-533953b1d3bd,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:17.219331 kubelet[1736]: E0213 19:19:17.219288 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:17.219771 containerd[1436]: time="2025-02-13T19:19:17.219725552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vkxb2,Uid:67e4a7ed-0b66-40db-a070-4403f394cd31,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:17.236367 containerd[1436]: time="2025-02-13T19:19:17.236208500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:17.236367 containerd[1436]: time="2025-02-13T19:19:17.236290019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:17.237099 containerd[1436]: time="2025-02-13T19:19:17.236940811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:17.237099 containerd[1436]: time="2025-02-13T19:19:17.237059010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:17.256813 containerd[1436]: time="2025-02-13T19:19:17.256508160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:17.256813 containerd[1436]: time="2025-02-13T19:19:17.256608599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:17.256813 containerd[1436]: time="2025-02-13T19:19:17.256639918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:17.257532 containerd[1436]: time="2025-02-13T19:19:17.257403908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:17.257937 systemd[1]: Started cri-containerd-dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95.scope - libcontainer container dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95. Feb 13 19:19:17.276917 systemd[1]: Started cri-containerd-09d0be5176b5d7e0b79b6bb842885d48c5970323f2fc3ff36cfa26c0ff046d86.scope - libcontainer container 09d0be5176b5d7e0b79b6bb842885d48c5970323f2fc3ff36cfa26c0ff046d86. 
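The recurring dns.go warnings above come from the kubelet capping the pod's resolv.conf at three nameservers (the traditional glibc limit) and logging the line it actually applied. A minimal sketch of that truncation, assuming the usual /etc/resolv.conf location:

// Minimal sketch of the limit behind the "Nameserver limits exceeded" warnings:
// keep only the first three "nameserver" entries and report what was applied.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // limit the kubelet warning refers to

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	if len(nameservers) > maxNameservers {
		applied := nameservers[:maxNameservers]
		fmt.Printf("nameserver limit exceeded: %d configured, applied line: %s\n",
			len(nameservers), strings.Join(applied, " "))
		return
	}
	fmt.Println("applied nameservers:", strings.Join(nameservers, " "))
}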
Feb 13 19:19:17.282933 containerd[1436]: time="2025-02-13T19:19:17.282891741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k8dm,Uid:40cc7398-be49-4bc0-a90c-533953b1d3bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\"" Feb 13 19:19:17.283767 kubelet[1736]: E0213 19:19:17.283619 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:17.287035 containerd[1436]: time="2025-02-13T19:19:17.286996409Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:19:17.297770 containerd[1436]: time="2025-02-13T19:19:17.297688831Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6\"" Feb 13 19:19:17.299351 containerd[1436]: time="2025-02-13T19:19:17.299321490Z" level=info msg="StartContainer for \"a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6\"" Feb 13 19:19:17.311321 containerd[1436]: time="2025-02-13T19:19:17.311120779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vkxb2,Uid:67e4a7ed-0b66-40db-a070-4403f394cd31,Namespace:kube-system,Attempt:0,} returns sandbox id \"09d0be5176b5d7e0b79b6bb842885d48c5970323f2fc3ff36cfa26c0ff046d86\"" Feb 13 19:19:17.313190 kubelet[1736]: E0213 19:19:17.312738 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:17.314926 containerd[1436]: time="2025-02-13T19:19:17.314893891Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:19:17.329016 systemd[1]: Started cri-containerd-a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6.scope - libcontainer container a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6. Feb 13 19:19:17.352036 containerd[1436]: time="2025-02-13T19:19:17.351968695Z" level=info msg="StartContainer for \"a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6\" returns successfully" Feb 13 19:19:17.410875 systemd[1]: cri-containerd-a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6.scope: Deactivated successfully. 
Feb 13 19:19:17.436356 containerd[1436]: time="2025-02-13T19:19:17.436287332Z" level=info msg="shim disconnected" id=a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6 namespace=k8s.io Feb 13 19:19:17.436356 containerd[1436]: time="2025-02-13T19:19:17.436345092Z" level=warning msg="cleaning up after shim disconnected" id=a832e26bb35df2507f1263933bc6d7ac373b498978b733dd6ea57b6246648ae6 namespace=k8s.io Feb 13 19:19:17.436356 containerd[1436]: time="2025-02-13T19:19:17.436356332Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:17.996378 kubelet[1736]: E0213 19:19:17.996323 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:18.217362 kubelet[1736]: E0213 19:19:18.217334 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:18.219124 containerd[1436]: time="2025-02-13T19:19:18.219087043Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:19:18.240544 containerd[1436]: time="2025-02-13T19:19:18.240489335Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1\"" Feb 13 19:19:18.241215 containerd[1436]: time="2025-02-13T19:19:18.241175047Z" level=info msg="StartContainer for \"8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1\"" Feb 13 19:19:18.269958 systemd[1]: Started cri-containerd-8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1.scope - libcontainer container 8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1. Feb 13 19:19:18.294770 containerd[1436]: time="2025-02-13T19:19:18.294699139Z" level=info msg="StartContainer for \"8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1\" returns successfully" Feb 13 19:19:18.308724 systemd[1]: cri-containerd-8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1.scope: Deactivated successfully. 
Feb 13 19:19:18.333525 containerd[1436]: time="2025-02-13T19:19:18.333442415Z" level=info msg="shim disconnected" id=8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1 namespace=k8s.io Feb 13 19:19:18.333525 containerd[1436]: time="2025-02-13T19:19:18.333505375Z" level=warning msg="cleaning up after shim disconnected" id=8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1 namespace=k8s.io Feb 13 19:19:18.333525 containerd[1436]: time="2025-02-13T19:19:18.333516734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:18.651458 containerd[1436]: time="2025-02-13T19:19:18.651352488Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:18.652262 containerd[1436]: time="2025-02-13T19:19:18.652207517Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:19:18.653176 containerd[1436]: time="2025-02-13T19:19:18.653123986Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:18.654524 containerd[1436]: time="2025-02-13T19:19:18.654496729Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.339415681s" Feb 13 19:19:18.654524 containerd[1436]: time="2025-02-13T19:19:18.654527408Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:19:18.656775 containerd[1436]: time="2025-02-13T19:19:18.656739061Z" level=info msg="CreateContainer within sandbox \"09d0be5176b5d7e0b79b6bb842885d48c5970323f2fc3ff36cfa26c0ff046d86\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:19:18.666244 containerd[1436]: time="2025-02-13T19:19:18.666203183Z" level=info msg="CreateContainer within sandbox \"09d0be5176b5d7e0b79b6bb842885d48c5970323f2fc3ff36cfa26c0ff046d86\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"67d44721faec77b8e46912c38376f38dd92bfea55bf390f7769a0093b70c8f36\"" Feb 13 19:19:18.666635 containerd[1436]: time="2025-02-13T19:19:18.666556698Z" level=info msg="StartContainer for \"67d44721faec77b8e46912c38376f38dd92bfea55bf390f7769a0093b70c8f36\"" Feb 13 19:19:18.696945 systemd[1]: Started cri-containerd-67d44721faec77b8e46912c38376f38dd92bfea55bf390f7769a0093b70c8f36.scope - libcontainer container 67d44721faec77b8e46912c38376f38dd92bfea55bf390f7769a0093b70c8f36. 
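The PullImage/ImageCreate entries above record containerd fetching the cilium-operator image by digest on the kubelet's behalf. A hedged sketch of the same pull through the containerd Go client (v1 module path) is below; the socket path and the "k8s.io" namespace are the usual CRI defaults and are assumed here, and the image reference is copied from the log.

// Hedged sketch: pull the cilium-operator image reference from the log with
// the containerd client, in the namespace the CRI plugin uses.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	image, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, err := image.Size(ctx)
	if err != nil {
		panic(err)
	}
	// The log reports the pulled digest and size in the same way.
	fmt.Printf("pulled %s (%d bytes)\n", image.Name(), size)
}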
Feb 13 19:19:18.718973 containerd[1436]: time="2025-02-13T19:19:18.718872365Z" level=info msg="StartContainer for \"67d44721faec77b8e46912c38376f38dd92bfea55bf390f7769a0093b70c8f36\" returns successfully" Feb 13 19:19:18.997163 kubelet[1736]: E0213 19:19:18.997118 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:19.037900 systemd[1]: run-containerd-runc-k8s.io-8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1-runc.dZGqny.mount: Deactivated successfully. Feb 13 19:19:19.037991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c0c852ae91387a9a5f255294d3ebbf7d678971ec7ace00f66b6260fec29d1e1-rootfs.mount: Deactivated successfully. Feb 13 19:19:19.219921 kubelet[1736]: E0213 19:19:19.219888 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:19.222339 kubelet[1736]: E0213 19:19:19.222126 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:19.223848 containerd[1436]: time="2025-02-13T19:19:19.223811780Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:19:19.228762 kubelet[1736]: I0213 19:19:19.228700 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vkxb2" podStartSLOduration=1.8877541789999999 podStartE2EDuration="3.228683721s" podCreationTimestamp="2025-02-13 19:19:16 +0000 UTC" firstStartedPulling="2025-02-13 19:19:17.314522935 +0000 UTC m=+56.920482554" lastFinishedPulling="2025-02-13 19:19:18.655452437 +0000 UTC m=+58.261412096" observedRunningTime="2025-02-13 19:19:19.227880411 +0000 UTC m=+58.833840110" watchObservedRunningTime="2025-02-13 19:19:19.228683721 +0000 UTC m=+58.834643420" Feb 13 19:19:19.239062 containerd[1436]: time="2025-02-13T19:19:19.239022716Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570\"" Feb 13 19:19:19.239577 containerd[1436]: time="2025-02-13T19:19:19.239476950Z" level=info msg="StartContainer for \"fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570\"" Feb 13 19:19:19.260518 systemd[1]: run-containerd-runc-k8s.io-fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570-runc.B87kY2.mount: Deactivated successfully. Feb 13 19:19:19.276004 systemd[1]: Started cri-containerd-fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570.scope - libcontainer container fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570. Feb 13 19:19:19.299395 systemd[1]: cri-containerd-fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570.scope: Deactivated successfully. 
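The pod_startup_latency_tracker entry above derives its durations from the timestamps it prints. A small sketch of that arithmetic follows, using values copied from the log with the monotonic "m=+…" suffixes dropped; the results land within a couple of milliseconds of the logged podStartE2EDuration and of the ~1.34s pull time reported earlier.

// Small sketch of the duration arithmetic behind the latency-tracker entry.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 19:19:16 +0000 UTC")
	firstStartedPulling := mustParse("2025-02-13 19:19:17.314522935 +0000 UTC")
	lastFinishedPulling := mustParse("2025-02-13 19:19:18.655452437 +0000 UTC")
	observedRunning := mustParse("2025-02-13 19:19:19.227880411 +0000 UTC")

	fmt.Println("image pulling window:", lastFinishedPulling.Sub(firstStartedPulling)) // ~1.34s
	fmt.Println("pod start E2E:       ", observedRunning.Sub(created))                 // ~3.23s
}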
Feb 13 19:19:19.307879 containerd[1436]: time="2025-02-13T19:19:19.307665723Z" level=info msg="StartContainer for \"fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570\" returns successfully" Feb 13 19:19:19.312604 containerd[1436]: time="2025-02-13T19:19:19.306009223Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40cc7398_be49_4bc0_a90c_533953b1d3bd.slice/cri-containerd-fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570.scope/memory.events\": no such file or directory" Feb 13 19:19:19.405210 containerd[1436]: time="2025-02-13T19:19:19.405134140Z" level=info msg="shim disconnected" id=fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570 namespace=k8s.io Feb 13 19:19:19.405210 containerd[1436]: time="2025-02-13T19:19:19.405205379Z" level=warning msg="cleaning up after shim disconnected" id=fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570 namespace=k8s.io Feb 13 19:19:19.405210 containerd[1436]: time="2025-02-13T19:19:19.405215099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:19.997841 kubelet[1736]: E0213 19:19:19.997801 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:20.037083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570-rootfs.mount: Deactivated successfully. Feb 13 19:19:20.230441 kubelet[1736]: E0213 19:19:20.230040 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:20.230441 kubelet[1736]: E0213 19:19:20.230138 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:20.232020 containerd[1436]: time="2025-02-13T19:19:20.231979022Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:19:20.243434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729360389.mount: Deactivated successfully. Feb 13 19:19:20.244073 containerd[1436]: time="2025-02-13T19:19:20.244032880Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d\"" Feb 13 19:19:20.244571 containerd[1436]: time="2025-02-13T19:19:20.244533194Z" level=info msg="StartContainer for \"f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d\"" Feb 13 19:19:20.282941 systemd[1]: Started cri-containerd-f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d.scope - libcontainer container f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d. Feb 13 19:19:20.302347 systemd[1]: cri-containerd-f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d.scope: Deactivated successfully. 
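The cgroupsv2 EventChan warning above reflects a benign race: mount-bpf-fs exits almost immediately, so its .scope cgroup is gone before the memory.events watcher attaches. A minimal sketch of the condition containerd is reporting, using the cgroup path copied from the log (any still-running scope would show the file's contents instead):

// Hedged sketch: reading memory.events for a scope that may already be gone.
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod40cc7398_be49_4bc0_a90c_533953b1d3bd.slice/" +
		"cri-containerd-fd5225bae8e65ff8aa139a9a9c1050531c1f85772829bacc9ab0cf6eed23d570.scope/memory.events"

	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		// Same condition the warning describes: the scope exited and its
		// cgroup directory was removed, so there is nothing left to watch.
		fmt.Println("cgroup already removed:", path)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("memory.events:\n%s", data)
}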
Feb 13 19:19:20.303719 containerd[1436]: time="2025-02-13T19:19:20.303559658Z" level=info msg="StartContainer for \"f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d\" returns successfully" Feb 13 19:19:20.323324 containerd[1436]: time="2025-02-13T19:19:20.323108267Z" level=info msg="shim disconnected" id=f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d namespace=k8s.io Feb 13 19:19:20.323324 containerd[1436]: time="2025-02-13T19:19:20.323165466Z" level=warning msg="cleaning up after shim disconnected" id=f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d namespace=k8s.io Feb 13 19:19:20.323324 containerd[1436]: time="2025-02-13T19:19:20.323176146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:20.960142 kubelet[1736]: E0213 19:19:20.960103 1736 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:20.981601 containerd[1436]: time="2025-02-13T19:19:20.981540176Z" level=info msg="StopPodSandbox for \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\"" Feb 13 19:19:20.981936 containerd[1436]: time="2025-02-13T19:19:20.981619655Z" level=info msg="TearDown network for sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" successfully" Feb 13 19:19:20.981936 containerd[1436]: time="2025-02-13T19:19:20.981630695Z" level=info msg="StopPodSandbox for \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" returns successfully" Feb 13 19:19:20.982284 containerd[1436]: time="2025-02-13T19:19:20.982132449Z" level=info msg="RemovePodSandbox for \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\"" Feb 13 19:19:20.982284 containerd[1436]: time="2025-02-13T19:19:20.982164329Z" level=info msg="Forcibly stopping sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\"" Feb 13 19:19:20.982284 containerd[1436]: time="2025-02-13T19:19:20.982211848Z" level=info msg="TearDown network for sandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" successfully" Feb 13 19:19:20.990160 containerd[1436]: time="2025-02-13T19:19:20.989976636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:19:20.990160 containerd[1436]: time="2025-02-13T19:19:20.990026476Z" level=info msg="RemovePodSandbox \"0ca543eba4baf065519f26f43d71e15ebd6ac4a5b520637174d9f7dff5299ec8\" returns successfully" Feb 13 19:19:20.998725 kubelet[1736]: E0213 19:19:20.998694 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:21.037189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f666123cdecd0003517e1138c6765bb18c4da20eefc8e22ad362d6486676060d-rootfs.mount: Deactivated successfully. 
Feb 13 19:19:21.112766 kubelet[1736]: E0213 19:19:21.112717 1736 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:19:21.234380 kubelet[1736]: E0213 19:19:21.234277 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:21.239095 containerd[1436]: time="2025-02-13T19:19:21.238857976Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:19:21.252305 containerd[1436]: time="2025-02-13T19:19:21.252259622Z" level=info msg="CreateContainer within sandbox \"dc7d6203bab1cfffe6ad9e333f7aaa606ca508b86eb5d9521018c7a5848e8f95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b556275918f7bd7385dd367d3c14ca637c5ed9901b4b82c049684748d1fab02f\"" Feb 13 19:19:21.253365 containerd[1436]: time="2025-02-13T19:19:21.253167171Z" level=info msg="StartContainer for \"b556275918f7bd7385dd367d3c14ca637c5ed9901b4b82c049684748d1fab02f\"" Feb 13 19:19:21.284926 systemd[1]: Started cri-containerd-b556275918f7bd7385dd367d3c14ca637c5ed9901b4b82c049684748d1fab02f.scope - libcontainer container b556275918f7bd7385dd367d3c14ca637c5ed9901b4b82c049684748d1fab02f. Feb 13 19:19:21.308125 containerd[1436]: time="2025-02-13T19:19:21.308016702Z" level=info msg="StartContainer for \"b556275918f7bd7385dd367d3c14ca637c5ed9901b4b82c049684748d1fab02f\" returns successfully" Feb 13 19:19:21.573827 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:19:21.999835 kubelet[1736]: E0213 19:19:21.999791 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:22.239063 kubelet[1736]: E0213 19:19:22.239035 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:22.254921 kubelet[1736]: I0213 19:19:22.254787 1736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9k8dm" podStartSLOduration=6.254769793 podStartE2EDuration="6.254769793s" podCreationTimestamp="2025-02-13 19:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:22.254315158 +0000 UTC m=+61.860274777" watchObservedRunningTime="2025-02-13 19:19:22.254769793 +0000 UTC m=+61.860729412" Feb 13 19:19:22.515096 kubelet[1736]: I0213 19:19:22.514950 1736 setters.go:600] "Node became not ready" node="10.0.0.113" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:19:22Z","lastTransitionTime":"2025-02-13T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:19:23.000150 kubelet[1736]: E0213 19:19:23.000092 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:23.241405 kubelet[1736]: E0213 19:19:23.241048 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:24.000752 kubelet[1736]: E0213 19:19:24.000697 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:24.449618 systemd-networkd[1372]: lxc_health: Link UP Feb 13 19:19:24.462498 systemd-networkd[1372]: lxc_health: Gained carrier Feb 13 19:19:25.001826 kubelet[1736]: E0213 19:19:25.001776 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:25.216853 kubelet[1736]: E0213 19:19:25.216815 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:25.247553 kubelet[1736]: E0213 19:19:25.247506 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:25.722962 systemd-networkd[1372]: lxc_health: Gained IPv6LL Feb 13 19:19:26.002868 kubelet[1736]: E0213 19:19:26.002816 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:26.249940 kubelet[1736]: E0213 19:19:26.249897 1736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:27.003850 kubelet[1736]: E0213 19:19:27.003800 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:28.004281 kubelet[1736]: E0213 19:19:28.004232 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:29.004933 kubelet[1736]: E0213 19:19:29.004885 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:30.005675 kubelet[1736]: E0213 19:19:30.005621 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:31.006488 kubelet[1736]: E0213 19:19:31.006448 1736 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
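Earlier in this stretch the setters.go entry recorded the node flipping to NotReady with reason KubeletNotReady because the CNI plugin was not yet initialized; once cilium-agent is running and lxc_health gains carrier, that condition is expected to clear. A hedged client-go sketch for reading the condition from the API server follows; the kubeconfig path is a common default and the node name is the one from the log, both assumptions for illustration.

// Hedged sketch: read the Ready condition for the node named in the log.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "10.0.0.113", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// While the CNI plugin is uninitialized this reports False with
			// reason KubeletNotReady, matching the setters.go log entry; it is
			// expected to report True once the network plugin is ready.
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}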