Feb 13 19:02:48.952436 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:02:48.952458 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:02:48.952468 kernel: KASLR enabled
Feb 13 19:02:48.952474 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:02:48.952479 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:02:48.952485 kernel: random: crng init done
Feb 13 19:02:48.952492 kernel: secureboot: Secure boot disabled
Feb 13 19:02:48.952499 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:02:48.952504 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:02:48.952512 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:02:48.952518 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952524 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952530 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952537 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952544 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952552 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952558 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952573 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952579 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:02:48.952585 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:02:48.952592 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:02:48.952598 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:02:48.952604 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Feb 13 19:02:48.952610 kernel: Zone ranges:
Feb 13 19:02:48.952618 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:02:48.952626 kernel: DMA32 empty
Feb 13 19:02:48.952632 kernel: Normal empty
Feb 13 19:02:48.952638 kernel: Movable zone start for each node
Feb 13 19:02:48.952644 kernel: Early memory node ranges
Feb 13 19:02:48.952650 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:02:48.952656 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:02:48.952663 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:02:48.952669 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:02:48.952675 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:02:48.952681 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:02:48.952688 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:02:48.952694 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:02:48.952701 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:02:48.952708 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:02:48.952714 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:02:48.952723 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:02:48.952730 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:02:48.952737 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:02:48.952745 kernel: psci: Trusted OS migration not required
Feb 13 19:02:48.952751 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:02:48.952758 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:02:48.952764 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:02:48.952771 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:02:48.952778 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:02:48.952785 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:02:48.952792 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:02:48.952798 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:02:48.952805 kernel: CPU features: detected: Spectre-v4
Feb 13 19:02:48.952813 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:02:48.952820 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:02:48.952826 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:02:48.952833 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:02:48.952840 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:02:48.952846 kernel: alternatives: applying boot alternatives
Feb 13 19:02:48.952854 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:02:48.952861 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:02:48.952868 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:02:48.952875 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:02:48.952882 kernel: Fallback order for Node 0: 0
Feb 13 19:02:48.952890 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:02:48.952896 kernel: Policy zone: DMA
Feb 13 19:02:48.952903 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:02:48.952910 kernel: software IO TLB: area num 4.
Feb 13 19:02:48.952917 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:02:48.952924 kernel: Memory: 2387548K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184740K reserved, 0K cma-reserved)
Feb 13 19:02:48.952932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:02:48.952943 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:02:48.952951 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:02:48.952958 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:02:48.952965 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:02:48.952972 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:02:48.952980 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:02:48.952987 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:02:48.952994 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:02:48.953001 kernel: GICv3: 256 SPIs implemented
Feb 13 19:02:48.953007 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:02:48.953014 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:02:48.953021 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:02:48.953027 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:02:48.953034 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:02:48.953041 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:02:48.953048 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:02:48.953056 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:02:48.953127 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:02:48.953136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:02:48.953144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:48.953150 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:02:48.953157 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:02:48.953164 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:02:48.953171 kernel: arm-pv: using stolen time PV
Feb 13 19:02:48.953178 kernel: Console: colour dummy device 80x25
Feb 13 19:02:48.953185 kernel: ACPI: Core revision 20230628
Feb 13 19:02:48.953192 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:02:48.953202 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:02:48.953209 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:02:48.953216 kernel: landlock: Up and running.
Feb 13 19:02:48.953222 kernel: SELinux: Initializing.
Feb 13 19:02:48.953229 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:48.953236 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:48.953243 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:02:48.953250 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:02:48.953259 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:02:48.953267 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:02:48.953274 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:02:48.953281 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:02:48.953287 kernel: Remapping and enabling EFI services.
Feb 13 19:02:48.953294 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:02:48.953301 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:02:48.953308 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:02:48.953315 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:02:48.953321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:48.953330 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:02:48.953338 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:02:48.953349 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:02:48.953358 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:02:48.953365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:48.953373 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:02:48.953380 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:02:48.953389 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:02:48.953399 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:02:48.953410 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:02:48.953417 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:02:48.953424 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:02:48.953431 kernel: SMP: Total of 4 processors activated.
Feb 13 19:02:48.953438 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:02:48.953445 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:02:48.953453 kernel: CPU features: detected: Common not Private translations
Feb 13 19:02:48.953460 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:02:48.953468 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:02:48.953475 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:02:48.953483 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:02:48.953490 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:02:48.953497 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:02:48.953504 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:02:48.953511 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:02:48.953518 kernel: alternatives: applying system-wide alternatives
Feb 13 19:02:48.953526 kernel: devtmpfs: initialized
Feb 13 19:02:48.953533 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:02:48.953542 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:02:48.953549 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:02:48.953556 kernel: SMBIOS 3.0.0 present.
Feb 13 19:02:48.953568 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:02:48.953575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:02:48.953582 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:02:48.953590 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:02:48.953597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:02:48.953606 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:02:48.953613 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Feb 13 19:02:48.953621 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:02:48.953628 kernel: cpuidle: using governor menu
Feb 13 19:02:48.953635 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:02:48.953643 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:02:48.953650 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:02:48.953657 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:02:48.953664 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:02:48.953671 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:02:48.953680 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:02:48.953687 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:02:48.953695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:02:48.953702 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:02:48.953709 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:02:48.953716 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:02:48.953723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:02:48.953730 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:02:48.953738 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:02:48.953746 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:02:48.953753 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:02:48.953760 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:02:48.953767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:02:48.953775 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:02:48.953782 kernel: ACPI: Interpreter enabled
Feb 13 19:02:48.953789 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:02:48.953796 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:02:48.953803 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:02:48.953812 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:02:48.953820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:02:48.953964 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:02:48.954038 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:02:48.954130 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:02:48.954199 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:02:48.954267 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:02:48.954279 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:02:48.954286 kernel: PCI host bridge to bus 0000:00
Feb 13 19:02:48.954361 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:02:48.954423 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:02:48.954483 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:02:48.954576 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:02:48.954662 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:02:48.954746 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:02:48.954816 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:02:48.954920 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:02:48.954997 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:02:48.955093 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:02:48.955169 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:02:48.955246 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:02:48.955315 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:02:48.955378 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:02:48.955442 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:02:48.955452 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:02:48.955459 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:02:48.955467 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:02:48.955474 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:02:48.955483 kernel: iommu: Default domain type: Translated
Feb 13 19:02:48.955491 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:02:48.955498 kernel: efivars: Registered efivars operations
Feb 13 19:02:48.955505 kernel: vgaarb: loaded
Feb 13 19:02:48.955512 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:02:48.955519 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:02:48.955527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:02:48.955534 kernel: pnp: PnP ACPI init
Feb 13 19:02:48.955638 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:02:48.955653 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:02:48.955662 kernel: NET: Registered PF_INET protocol family
Feb 13 19:02:48.955669 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:02:48.955677 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:02:48.955686 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:02:48.955696 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:02:48.955703 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:02:48.955710 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:02:48.955718 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:48.955727 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:48.955734 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:02:48.955742 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:02:48.955749 kernel: kvm [1]: HYP mode not available
Feb 13 19:02:48.955757 kernel: Initialise system trusted keyrings
Feb 13 19:02:48.955764 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:02:48.955771 kernel: Key type asymmetric registered
Feb 13 19:02:48.955779 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:02:48.955786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:02:48.955795 kernel: io scheduler mq-deadline registered
Feb 13 19:02:48.955803 kernel: io scheduler kyber registered
Feb 13 19:02:48.955810 kernel: io scheduler bfq registered
Feb 13 19:02:48.955818 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:02:48.955825 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:02:48.955832 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:02:48.955907 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:02:48.955918 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:02:48.955925 kernel: thunder_xcv, ver 1.0
Feb 13 19:02:48.955935 kernel: thunder_bgx, ver 1.0
Feb 13 19:02:48.955942 kernel: nicpf, ver 1.0
Feb 13 19:02:48.955949 kernel: nicvf, ver 1.0
Feb 13 19:02:48.956037 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:02:48.956129 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:02:48 UTC (1739473368)
Feb 13 19:02:48.956141 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:02:48.956148 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:02:48.956156 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:02:48.956166 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:02:48.956173 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:02:48.956180 kernel: Segment Routing with IPv6
Feb 13 19:02:48.956187 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:02:48.956195 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:02:48.956202 kernel: Key type dns_resolver registered
Feb 13 19:02:48.956209 kernel: registered taskstats version 1
Feb 13 19:02:48.956217 kernel: Loading compiled-in X.509 certificates
Feb 13 19:02:48.956224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:02:48.956233 kernel: Key type .fscrypt registered
Feb 13 19:02:48.956240 kernel: Key type fscrypt-provisioning registered
Feb 13 19:02:48.956248 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:02:48.956255 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:02:48.956263 kernel: ima: No architecture policies found
Feb 13 19:02:48.956270 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:02:48.956279 kernel: clk: Disabling unused clocks
Feb 13 19:02:48.956288 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:02:48.956296 kernel: Run /init as init process
Feb 13 19:02:48.956305 kernel: with arguments:
Feb 13 19:02:48.956312 kernel: /init
Feb 13 19:02:48.956319 kernel: with environment:
Feb 13 19:02:48.956326 kernel: HOME=/
Feb 13 19:02:48.956333 kernel: TERM=linux
Feb 13 19:02:48.956340 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:02:48.956349 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:02:48.956360 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:02:48.956369 systemd[1]: Detected virtualization kvm.
Feb 13 19:02:48.956377 systemd[1]: Detected architecture arm64.
Feb 13 19:02:48.956384 systemd[1]: Running in initrd.
Feb 13 19:02:48.956392 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:02:48.956400 systemd[1]: Hostname set to .
Feb 13 19:02:48.956408 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:02:48.956415 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:02:48.956424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:48.956433 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:48.956441 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:02:48.956449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:02:48.956457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:02:48.956466 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:02:48.956475 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:02:48.956485 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:02:48.956493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:48.956501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:48.956509 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:02:48.956517 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:02:48.956524 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:02:48.956532 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:02:48.956540 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:48.956548 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:48.956558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:02:48.956573 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:02:48.956581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:48.956589 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:48.956597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:48.956605 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:02:48.956613 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:02:48.956621 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:02:48.956631 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:02:48.956639 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:02:48.956647 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:02:48.956654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:02:48.956663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:48.956671 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:48.956678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:48.956688 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:02:48.956696 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:02:48.956704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:48.956712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:48.956746 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 19:02:48.956772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:02:48.956783 systemd-journald[237]: Journal started
Feb 13 19:02:48.956803 systemd-journald[237]: Runtime Journal (/run/log/journal/49b2e9e2e4624fb5bd63014497b5544c) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:02:48.946834 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 19:02:48.959483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:02:48.961858 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:02:48.961898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:02:48.964089 kernel: Bridge firewalling registered
Feb 13 19:02:48.964723 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 19:02:48.966720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:02:48.968359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:48.971002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:48.976139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:02:48.978923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:48.980518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:48.983558 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:02:48.990252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:48.992780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:48.999818 dracut-cmdline[277]: dracut-dracut-053
Feb 13 19:02:49.002308 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:02:49.056363 systemd-resolved[283]: Positive Trust Anchors:
Feb 13 19:02:49.056381 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:02:49.056413 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:02:49.077595 systemd-resolved[283]: Defaulting to hostname 'linux'.
Feb 13 19:02:49.079430 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:49.080395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:49.113094 kernel: SCSI subsystem initialized
Feb 13 19:02:49.118094 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:02:49.125115 kernel: iscsi: registered transport (tcp)
Feb 13 19:02:49.138301 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:02:49.138328 kernel: QLogic iSCSI HBA Driver
Feb 13 19:02:49.182267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:49.192275 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:02:49.209873 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:02:49.209935 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:02:49.209946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:02:49.259113 kernel: raid6: neonx8 gen() 15764 MB/s
Feb 13 19:02:49.276108 kernel: raid6: neonx4 gen() 15780 MB/s
Feb 13 19:02:49.293090 kernel: raid6: neonx2 gen() 13135 MB/s
Feb 13 19:02:49.310093 kernel: raid6: neonx1 gen() 10498 MB/s
Feb 13 19:02:49.327089 kernel: raid6: int64x8 gen() 6788 MB/s
Feb 13 19:02:49.344089 kernel: raid6: int64x4 gen() 7349 MB/s
Feb 13 19:02:49.361091 kernel: raid6: int64x2 gen() 6090 MB/s
Feb 13 19:02:49.378095 kernel: raid6: int64x1 gen() 5030 MB/s
Feb 13 19:02:49.378128 kernel: raid6: using algorithm neonx4 gen() 15780 MB/s
Feb 13 19:02:49.395097 kernel: raid6: .... xor() 12290 MB/s, rmw enabled
Feb 13 19:02:49.395124 kernel: raid6: using neon recovery algorithm
Feb 13 19:02:49.400221 kernel: xor: measuring software checksum speed
Feb 13 19:02:49.400253 kernel: 8regs : 21670 MB/sec
Feb 13 19:02:49.401227 kernel: 32regs : 21699 MB/sec
Feb 13 19:02:49.401243 kernel: arm64_neon : 27841 MB/sec
Feb 13 19:02:49.401252 kernel: xor: using function: arm64_neon (27841 MB/sec)
Feb 13 19:02:49.451097 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:02:49.464095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:49.473272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:49.488520 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Feb 13 19:02:49.493184 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:49.501245 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:02:49.512862 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 19:02:49.540643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:49.549259 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:02:49.591732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:49.599283 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:02:49.612199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:49.613496 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:49.615230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:49.617132 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:02:49.625538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:02:49.635177 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:49.658893 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:02:49.661969 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:02:49.662095 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:02:49.662108 kernel: GPT:9289727 != 19775487
Feb 13 19:02:49.662117 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:02:49.662125 kernel: GPT:9289727 != 19775487
Feb 13 19:02:49.662134 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:02:49.662142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:02:49.660925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:49.661046 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:49.665039 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:49.666517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:49.666691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:49.669794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:49.676150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:49.684101 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (517)
Feb 13 19:02:49.688105 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (514)
Feb 13 19:02:49.693092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:49.701835 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:02:49.713535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:02:49.725361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:02:49.731422 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:02:49.732517 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:02:49.746246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:02:49.749574 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:49.754720 disk-uuid[556]: Primary Header is updated.
Feb 13 19:02:49.754720 disk-uuid[556]: Secondary Entries is updated.
Feb 13 19:02:49.754720 disk-uuid[556]: Secondary Header is updated.
Feb 13 19:02:49.757668 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:02:49.781124 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:50.777796 disk-uuid[557]: The operation has completed successfully.
Feb 13 19:02:50.778752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:02:50.801576 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:02:50.802496 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:02:50.838253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:02:50.841226 sh[576]: Success
Feb 13 19:02:50.853291 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:02:50.885972 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:02:50.899548 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:02:50.901033 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:02:50.912806 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:02:50.912856 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:50.912866 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:02:50.912877 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:02:50.913364 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:02:50.917857 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:02:50.918768 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:02:50.929260 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:02:50.930741 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:02:50.940684 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:50.940730 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:50.940742 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:50.944757 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:50.951986 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:02:50.953154 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:50.960545 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:02:50.967273 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:02:51.028675 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:51.039242 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:02:51.067608 ignition[674]: Ignition 2.20.0
Feb 13 19:02:51.067622 ignition[674]: Stage: fetch-offline
Feb 13 19:02:51.067660 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:51.067670 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:51.067850 ignition[674]: parsed url from cmdline: ""
Feb 13 19:02:51.067854 ignition[674]: no config URL provided
Feb 13 19:02:51.067858 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:02:51.067866 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:02:51.067891 ignition[674]: op(1): [started] loading QEMU firmware config module
Feb 13 19:02:51.067896 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:02:51.075852 systemd-networkd[766]: lo: Link UP
Feb 13 19:02:51.075856 systemd-networkd[766]: lo: Gained carrier
Feb 13 19:02:51.077364 ignition[674]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:02:51.077385 ignition[674]: QEMU firmware config was not found. Ignoring...
Feb 13 19:02:51.078126 systemd-networkd[766]: Enumeration completed
Feb 13 19:02:51.078287 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:02:51.079415 systemd[1]: Reached target network.target - Network.
Feb 13 19:02:51.081275 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:51.081279 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:02:51.081984 systemd-networkd[766]: eth0: Link UP
Feb 13 19:02:51.081987 systemd-networkd[766]: eth0: Gained carrier
Feb 13 19:02:51.081994 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:51.092347 ignition[674]: parsing config with SHA512: 62429df1a756b84271ced3fc3d63d9360aaaa5b41a3d209833753cbff4f2973f357f95515aa5a229d68683f354c8014b8c47532cd84f4f11a0c6ea435a88aeb7
Feb 13 19:02:51.095862 unknown[674]: fetched base config from "system"
Feb 13 19:02:51.095873 unknown[674]: fetched user config from "qemu"
Feb 13 19:02:51.096156 ignition[674]: fetch-offline: fetch-offline passed
Feb 13 19:02:51.097951 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:51.096227 ignition[674]: Ignition finished successfully
Feb 13 19:02:51.099454 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:02:51.101153 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:02:51.106232 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:02:51.118609 ignition[772]: Ignition 2.20.0
Feb 13 19:02:51.118620 ignition[772]: Stage: kargs
Feb 13 19:02:51.118790 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:51.118800 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:51.119477 ignition[772]: kargs: kargs passed
Feb 13 19:02:51.119520 ignition[772]: Ignition finished successfully
Feb 13 19:02:51.123316 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:51.136237 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:02:51.145711 ignition[782]: Ignition 2.20.0
Feb 13 19:02:51.145720 ignition[782]: Stage: disks
Feb 13 19:02:51.145885 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:51.145900 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:51.146591 ignition[782]: disks: disks passed
Feb 13 19:02:51.148381 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:02:51.146635 ignition[782]: Ignition finished successfully
Feb 13 19:02:51.149769 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:51.151228 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:02:51.152766 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:02:51.154385 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:02:51.155985 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:02:51.170238 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:02:51.180851 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:02:51.184430 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:02:51.186592 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:02:51.233959 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:02:51.235173 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:02:51.235076 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:51.250159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:51.251771 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:02:51.252839 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:02:51.252883 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:02:51.258881 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Feb 13 19:02:51.253020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:51.261955 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:51.261973 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:51.261991 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:51.260368 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:02:51.264093 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:51.282248 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:02:51.284006 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:51.322988 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:02:51.326894 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:02:51.330789 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:02:51.334407 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:02:51.405364 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:51.419181 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:02:51.420627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:51.425089 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:51.442558 ignition[916]: INFO : Ignition 2.20.0
Feb 13 19:02:51.442558 ignition[916]: INFO : Stage: mount
Feb 13 19:02:51.443814 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:51.443814 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:51.443814 ignition[916]: INFO : mount: mount passed
Feb 13 19:02:51.443814 ignition[916]: INFO : Ignition finished successfully
Feb 13 19:02:51.445691 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:02:51.456210 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:02:51.457130 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:51.933385 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:02:51.943375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:51.950132 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930)
Feb 13 19:02:51.952516 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:51.952595 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:51.952616 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:02:51.955098 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:02:51.956541 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:51.973126 ignition[947]: INFO : Ignition 2.20.0
Feb 13 19:02:51.973126 ignition[947]: INFO : Stage: files
Feb 13 19:02:51.974617 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:51.974617 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:51.974617 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:02:51.977969 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:02:51.977969 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:02:51.981283 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:02:51.982504 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:02:51.982504 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:02:51.981895 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:51.986089 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:02:52.329470 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:02:52.588672 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:02:52.588672 ignition[947]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:02:52.591801 ignition[947]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:02:52.591801 ignition[947]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:02:52.591801 ignition[947]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:02:52.591801 ignition[947]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:02:52.612252 ignition[947]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:02:52.615804 ignition[947]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:02:52.617712 ignition[947]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:02:52.617712 ignition[947]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:52.622337 ignition[947]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:02:52.622337 ignition[947]: INFO : files: files passed
Feb 13 19:02:52.622337 ignition[947]: INFO : Ignition finished successfully
Feb 13 19:02:52.619998 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:02:52.636250 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:02:52.638056 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:02:52.639976 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:02:52.640095 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:02:52.645756 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:02:52.648408 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:52.648408 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:52.651232 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:52.651198 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:52.652381 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:02:52.664275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:02:52.685003 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:02:52.685191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:02:52.687095 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:02:52.688378 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:02:52.689934 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:02:52.695221 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:02:52.706877 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:52.720276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:02:52.728603 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:52.729646 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:52.731203 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:02:52.732599 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:02:52.732740 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:52.734750 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:02:52.736388 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:02:52.737694 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:02:52.739019 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:52.742277 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:52.743344 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:02:52.744799 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:52.753478 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:02:52.754886 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:02:52.756185 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:02:52.761335 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:02:52.761480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:52.763291 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:52.768019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:52.769463 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:02:52.770335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:52.772187 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:02:52.772332 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:52.774702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:02:52.774835 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:52.776582 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:02:52.776724 systemd-networkd[766]: eth0: Gained IPv6LL
Feb 13 19:02:52.778110 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:02:52.782152 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:52.783266 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:02:52.785088 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:02:52.786307 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:02:52.786399 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:52.787622 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:02:52.787698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:52.788843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:02:52.788956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:52.790277 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:02:52.790379 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:02:52.801307 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:02:52.802025 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:02:52.802178 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:52.808421 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:52.809854 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:02:52.810002 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:52.811093 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:02:52.811194 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:52.815722 ignition[1003]: INFO : Ignition 2.20.0
Feb 13 19:02:52.815722 ignition[1003]: INFO : Stage: umount
Feb 13 19:02:52.815722 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:52.815722 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:02:52.818651 ignition[1003]: INFO : umount: umount passed
Feb 13 19:02:52.818651 ignition[1003]: INFO : Ignition finished successfully
Feb 13 19:02:52.816711 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:02:52.821357 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:02:52.821457 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:02:52.824212 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:02:52.824315 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:02:52.827301 systemd[1]: Stopped target network.target - Network.
Feb 13 19:02:52.828123 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:02:52.828192 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:02:52.829634 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:02:52.829682 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:52.830986 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:02:52.831030 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:02:52.832296 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:02:52.832339 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:52.833827 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:02:52.835133 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:52.838588 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:02:52.838716 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:52.841933 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:02:52.842243 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:02:52.842280 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:52.844693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:02:52.844883 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:02:52.844973 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:02:52.847790 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:02:52.847846 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:52.859190 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:02:52.859898 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:02:52.859970 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:52.861564 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:02:52.861610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:52.863849 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:02:52.863894 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:52.865393 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:52.874639 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:02:52.874756 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:02:52.877750 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:02:52.877885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:52.879607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:02:52.879650 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:52.880971 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:02:52.881002 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:52.882357 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:02:52.882406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:52.884629 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:02:52.884675 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:52.886751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:52.886795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:52.899282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:02:52.900092 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:02:52.900164 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:52.902495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:52.902541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:52.905277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:02:52.905358 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:02:52.932817 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:02:52.932887 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:02:52.932918 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:02:52.932952 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:02:52.944975 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:02:52.945102 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:02:52.946604 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:02:52.947774 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:02:52.947830 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:02:52.959260 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:02:52.966205 systemd[1]: Switching root. Feb 13 19:02:52.995227 systemd-journald[237]: Journal stopped Feb 13 19:02:53.775202 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 19:02:53.775257 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:02:53.775269 kernel: SELinux: policy capability open_perms=1 Feb 13 19:02:53.775279 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:02:53.775289 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:02:53.775298 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:02:53.775308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:02:53.775318 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:02:53.775335 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:02:53.775345 kernel: audit: type=1403 audit(1739473373.133:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:02:53.775355 systemd[1]: Successfully loaded SELinux policy in 34.080ms. 
Feb 13 19:02:53.775372 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.032ms. Feb 13 19:02:53.775386 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:02:53.775399 systemd[1]: Detected virtualization kvm. Feb 13 19:02:53.775410 systemd[1]: Detected architecture arm64. Feb 13 19:02:53.775424 systemd[1]: Detected first boot. Feb 13 19:02:53.775435 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:02:53.775445 zram_generator::config[1051]: No configuration found. Feb 13 19:02:53.775456 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:02:53.775465 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:02:53.775476 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:02:53.775488 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:02:53.775498 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:02:53.775508 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:53.775519 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:02:53.775529 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:02:53.775539 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:02:53.775567 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:02:53.775580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:02:53.775591 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Feb 13 19:02:53.775604 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:02:53.775614 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:02:53.775625 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:02:53.775637 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:02:53.775647 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:02:53.775657 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:02:53.775668 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:02:53.775678 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:02:53.775688 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:02:53.775700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:02:53.775711 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:02:53.775721 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:02:53.775732 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:02:53.775742 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:02:53.775752 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:02:53.775763 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:02:53.775774 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:02:53.775790 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:02:53.775800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Feb 13 19:02:53.775811 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:02:53.775821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:02:53.775832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:02:53.775842 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:02:53.775852 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:02:53.775864 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:02:53.775874 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:02:53.775885 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:02:53.775895 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:02:53.775905 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:02:53.775916 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:02:53.775926 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:02:53.775937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:02:53.775947 systemd[1]: Reached target machines.target - Containers. Feb 13 19:02:53.775958 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:02:53.775970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:53.775982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:02:53.775993 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Feb 13 19:02:53.776004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:53.776014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:02:53.776025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:53.776035 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:02:53.776045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:53.776056 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:02:53.776079 kernel: fuse: init (API version 7.39) Feb 13 19:02:53.776090 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:02:53.776101 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:02:53.776111 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:02:53.776121 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:02:53.776132 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:53.776143 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:02:53.776153 kernel: ACPI: bus type drm_connector registered Feb 13 19:02:53.776164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:02:53.776175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:02:53.776185 kernel: loop: module loaded Feb 13 19:02:53.776194 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 19:02:53.776205 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:02:53.776215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:02:53.776227 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:02:53.776237 systemd[1]: Stopped verity-setup.service. Feb 13 19:02:53.776269 systemd-journald[1123]: Collecting audit messages is disabled. Feb 13 19:02:53.776290 systemd-journald[1123]: Journal started Feb 13 19:02:53.776311 systemd-journald[1123]: Runtime Journal (/run/log/journal/49b2e9e2e4624fb5bd63014497b5544c) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:02:53.571750 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:02:53.583963 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:02:53.584364 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:02:53.778128 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:02:53.779512 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:02:53.780536 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:02:53.781622 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:02:53.782558 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:02:53.783480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:02:53.784421 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:02:53.787097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:02:53.788295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:02:53.789517 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:02:53.789718 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Feb 13 19:02:53.790975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:53.791166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:53.792263 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:02:53.792495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:02:53.793606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:53.793781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:53.794969 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:02:53.795338 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:02:53.796469 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:53.796636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:53.797844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:02:53.799037 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:02:53.800603 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:02:53.802431 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:02:53.815427 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:02:53.834194 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:02:53.836212 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:02:53.837041 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:02:53.837130 systemd[1]: Reached target local-fs.target - Local File Systems. 
Feb 13 19:02:53.839043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:02:53.841118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:02:53.843053 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:02:53.843951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:53.845135 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:02:53.848788 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:02:53.849831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:02:53.851016 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:02:53.851978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:02:53.856303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:53.858013 systemd-journald[1123]: Time spent on flushing to /var/log/journal/49b2e9e2e4624fb5bd63014497b5544c is 25.246ms for 852 entries. Feb 13 19:02:53.858013 systemd-journald[1123]: System Journal (/var/log/journal/49b2e9e2e4624fb5bd63014497b5544c) is 8M, max 195.6M, 187.6M free. Feb 13 19:02:53.892813 systemd-journald[1123]: Received client request to flush runtime journal. Feb 13 19:02:53.892869 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 19:02:53.864630 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:02:53.869294 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:02:53.875076 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 19:02:53.876393 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:02:53.877463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:02:53.879046 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:02:53.892416 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:02:53.896316 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:02:53.897769 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:02:53.900979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:53.903951 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:02:53.910128 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:02:53.917562 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:02:53.924146 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:02:53.925709 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:02:53.928034 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:02:53.937099 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 19:02:53.947772 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:02:53.961336 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 19:02:53.961698 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 19:02:53.967989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 19:02:53.970151 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:02:54.011117 kernel: loop3: detected capacity change from 0 to 123192 Feb 13 19:02:54.017081 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 19:02:54.022115 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:02:54.026996 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:02:54.027429 (sd-merge)[1193]: Merged extensions into '/usr'. Feb 13 19:02:54.030719 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:02:54.030737 systemd[1]: Reloading... Feb 13 19:02:54.090093 zram_generator::config[1220]: No configuration found. Feb 13 19:02:54.157249 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:02:54.201992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:54.257939 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:02:54.258293 systemd[1]: Reloading finished in 227 ms. Feb 13 19:02:54.275373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:02:54.276578 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:02:54.292524 systemd[1]: Starting ensure-sysext.service... Feb 13 19:02:54.294224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:02:54.306676 systemd[1]: Reload requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:02:54.306691 systemd[1]: Reloading... Feb 13 19:02:54.313161 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 19:02:54.313739 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:02:54.314559 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:02:54.314894 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 19:02:54.315005 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 19:02:54.317631 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:02:54.317731 systemd-tmpfiles[1256]: Skipping /boot Feb 13 19:02:54.326793 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:02:54.326937 systemd-tmpfiles[1256]: Skipping /boot Feb 13 19:02:54.367167 zram_generator::config[1285]: No configuration found. Feb 13 19:02:54.452028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:54.509738 systemd[1]: Reloading finished in 202 ms. Feb 13 19:02:54.520697 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:02:54.535140 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:02:54.542604 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:54.544899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:02:54.547152 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:02:54.550495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:02:54.555359 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:02:54.561112 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:02:54.564629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:02:54.565792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:54.570506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:54.577344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:54.578288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:54.578398 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:54.581961 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:02:54.583884 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:02:54.588018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:54.588210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:54.590269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:54.590440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:54.601375 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:54.602946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:54.604972 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:02:54.610209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 19:02:54.612906 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Feb 13 19:02:54.621389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:02:54.625172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:02:54.630077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:02:54.636417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:02:54.637310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:02:54.637422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:02:54.641346 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:02:54.643407 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:02:54.646208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:02:54.647764 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:02:54.653658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:02:54.653831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:02:54.656312 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:02:54.656533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:02:54.657866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:02:54.658010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:02:54.659568 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 19:02:54.665348 augenrules[1384]: No rules Feb 13 19:02:54.666683 systemd[1]: Finished ensure-sysext.service. Feb 13 19:02:54.667694 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:54.667899 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:02:54.675626 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:02:54.675802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:02:54.690303 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:02:54.691054 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:02:54.691164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:02:54.692971 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:02:54.693925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:02:54.714476 systemd-resolved[1324]: Positive Trust Anchors: Feb 13 19:02:54.716292 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:02:54.716326 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:02:54.722612 systemd-resolved[1324]: Defaulting to hostname 'linux'. Feb 13 19:02:54.725142 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1363) Feb 13 19:02:54.727917 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:02:54.729059 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:02:54.732318 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:02:54.769543 systemd-networkd[1398]: lo: Link UP Feb 13 19:02:54.769562 systemd-networkd[1398]: lo: Gained carrier Feb 13 19:02:54.770482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:02:54.771923 systemd-networkd[1398]: Enumeration completed Feb 13 19:02:54.779508 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:02:54.779521 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 19:02:54.783200 systemd-networkd[1398]: eth0: Link UP Feb 13 19:02:54.783211 systemd-networkd[1398]: eth0: Gained carrier Feb 13 19:02:54.783225 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:02:54.784248 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:02:54.785489 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:02:54.786614 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:02:54.787898 systemd[1]: Reached target network.target - Network. Feb 13 19:02:54.788677 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:02:54.790971 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:02:54.794322 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:02:54.803424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:02:54.804278 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:02:54.805194 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 19:02:54.810466 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:02:54.810532 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-02-13 19:02:54.514152 UTC. Feb 13 19:02:54.813369 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:02:54.852302 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:02:54.858654 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Feb 13 19:02:54.861574 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:02:54.883030 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:02:54.904109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:02:54.921592 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:02:54.922754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:02:54.923640 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:02:54.924500 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:02:54.925406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:02:54.926455 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:02:54.927369 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:02:54.928272 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:02:54.929161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:02:54.929191 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:02:54.929832 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:02:54.931285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:02:54.933365 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:02:54.936419 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:02:54.937526 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Feb 13 19:02:54.938464 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:02:54.943928 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:02:54.945397 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:02:54.947285 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:02:54.948592 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:02:54.949497 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:02:54.950214 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:02:54.950912 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:02:54.950945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:02:54.951860 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:02:54.953637 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:02:54.954735 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:02:54.957223 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:02:54.959281 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:02:54.962806 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:02:54.964019 jq[1431]: false Feb 13 19:02:54.965323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:02:54.968588 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:02:54.970647 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 19:02:54.976810 extend-filesystems[1432]: Found loop3 Feb 13 19:02:54.976810 extend-filesystems[1432]: Found loop4 Feb 13 19:02:54.976810 extend-filesystems[1432]: Found loop5 Feb 13 19:02:54.976810 extend-filesystems[1432]: Found vda Feb 13 19:02:54.976810 extend-filesystems[1432]: Found vda1 Feb 13 19:02:54.976810 extend-filesystems[1432]: Found vda2 Feb 13 19:02:54.976810 extend-filesystems[1432]: Found vda3 Feb 13 19:02:54.981465 dbus-daemon[1430]: [system] SELinux support is enabled Feb 13 19:02:54.976958 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:02:54.986966 extend-filesystems[1432]: Found usr Feb 13 19:02:54.986966 extend-filesystems[1432]: Found vda4 Feb 13 19:02:54.986966 extend-filesystems[1432]: Found vda6 Feb 13 19:02:54.986966 extend-filesystems[1432]: Found vda7 Feb 13 19:02:54.986966 extend-filesystems[1432]: Found vda9 Feb 13 19:02:54.986966 extend-filesystems[1432]: Checking size of /dev/vda9 Feb 13 19:02:54.980031 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:02:54.980499 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:02:54.982277 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:02:54.987721 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:02:54.989318 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:02:54.992085 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:02:54.994769 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:02:54.994947 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Feb 13 19:02:54.995230 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:02:54.995390 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:02:54.996477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:02:54.996667 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:02:54.999710 jq[1447]: true Feb 13 19:02:55.000812 extend-filesystems[1432]: Resized partition /dev/vda9 Feb 13 19:02:55.004912 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:02:55.004968 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:02:55.007643 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:02:55.007660 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:02:55.014472 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:02:55.016885 jq[1457]: true Feb 13 19:02:55.026204 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:02:55.026268 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1364) Feb 13 19:02:55.030212 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:02:55.031779 update_engine[1446]: I20250213 19:02:55.031589 1446 main.cc:92] Flatcar Update Engine starting Feb 13 19:02:55.035672 update_engine[1446]: I20250213 19:02:55.035622 1446 update_check_scheduler.cc:74] Next update check in 7m22s Feb 13 19:02:55.035901 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:02:55.038441 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:02:55.060085 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:02:55.072910 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:02:55.072910 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:02:55.072910 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:02:55.077329 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Feb 13 19:02:55.075918 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:02:55.077237 systemd-logind[1438]: New seat seat0. Feb 13 19:02:55.087497 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:02:55.087704 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:02:55.088964 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 19:02:55.107546 bash[1480]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:02:55.109793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:02:55.111562 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:02:55.116957 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:02:55.250112 containerd[1463]: time="2025-02-13T19:02:55.249697109Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:02:55.276251 containerd[1463]: time="2025-02-13T19:02:55.275980083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277426850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277460554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277476309Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277755347Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277775801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277828612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.277840707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.278055879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.278084845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.278097750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:55.278767 containerd[1463]: time="2025-02-13T19:02:55.278106686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:55.279020 containerd[1463]: time="2025-02-13T19:02:55.278188579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:55.279020 containerd[1463]: time="2025-02-13T19:02:55.278379330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:55.279020 containerd[1463]: time="2025-02-13T19:02:55.278507870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:55.279020 containerd[1463]: time="2025-02-13T19:02:55.278520659Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:02:55.279020 containerd[1463]: time="2025-02-13T19:02:55.278595696Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:02:55.279020 containerd[1463]: time="2025-02-13T19:02:55.278635371Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:02:55.281863 containerd[1463]: time="2025-02-13T19:02:55.281839529Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:02:55.281974 containerd[1463]: time="2025-02-13T19:02:55.281960827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:02:55.282112 containerd[1463]: time="2025-02-13T19:02:55.282097534Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:02:55.282173 containerd[1463]: time="2025-02-13T19:02:55.282160399Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:02:55.282231 containerd[1463]: time="2025-02-13T19:02:55.282220182Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:02:55.282415 containerd[1463]: time="2025-02-13T19:02:55.282394561Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:02:55.282742 containerd[1463]: time="2025-02-13T19:02:55.282724676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:02:55.282915 containerd[1463]: time="2025-02-13T19:02:55.282896243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:02:55.282990 containerd[1463]: time="2025-02-13T19:02:55.282973745Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:02:55.283042 containerd[1463]: time="2025-02-13T19:02:55.283030986Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:02:55.283118 containerd[1463]: time="2025-02-13T19:02:55.283105483Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283167 containerd[1463]: time="2025-02-13T19:02:55.283156060Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283229 containerd[1463]: time="2025-02-13T19:02:55.283216074Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283279 containerd[1463]: time="2025-02-13T19:02:55.283268923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283336 containerd[1463]: time="2025-02-13T19:02:55.283325932Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283384 containerd[1463]: time="2025-02-13T19:02:55.283373697Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283436 containerd[1463]: time="2025-02-13T19:02:55.283424774Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:02:55.283490 containerd[1463]: time="2025-02-13T19:02:55.283478355Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:02:55.283546 containerd[1463]: time="2025-02-13T19:02:55.283535480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283593 containerd[1463]: time="2025-02-13T19:02:55.283583283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283640 containerd[1463]: time="2025-02-13T19:02:55.283630354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283714 containerd[1463]: time="2025-02-13T19:02:55.283701077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283767 containerd[1463]: time="2025-02-13T19:02:55.283755005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283816 containerd[1463]: time="2025-02-13T19:02:55.283805697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283862 containerd[1463]: time="2025-02-13T19:02:55.283852190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283908 containerd[1463]: time="2025-02-13T19:02:55.283898067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.283957 containerd[1463]: time="2025-02-13T19:02:55.283946911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.284019 containerd[1463]: time="2025-02-13T19:02:55.284004459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 13 19:02:55.284098 containerd[1463]: time="2025-02-13T19:02:55.284084773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.284149 containerd[1463]: time="2025-02-13T19:02:55.284138084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.284199 containerd[1463]: time="2025-02-13T19:02:55.284188623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.284266 containerd[1463]: time="2025-02-13T19:02:55.284253914Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:02:55.284329 containerd[1463]: time="2025-02-13T19:02:55.284317548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.284382 containerd[1463]: time="2025-02-13T19:02:55.284369897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.284445 containerd[1463]: time="2025-02-13T19:02:55.284432376Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:02:55.284679 containerd[1463]: time="2025-02-13T19:02:55.284664843Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:02:55.284753 containerd[1463]: time="2025-02-13T19:02:55.284738185Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:02:55.284859 containerd[1463]: time="2025-02-13T19:02:55.284845193Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 19:02:55.284913 containerd[1463]: time="2025-02-13T19:02:55.284901779Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:02:55.284958 containerd[1463]: time="2025-02-13T19:02:55.284947733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.285017 containerd[1463]: time="2025-02-13T19:02:55.285005667Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:02:55.285082 containerd[1463]: time="2025-02-13T19:02:55.285071728Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:02:55.285133 containerd[1463]: time="2025-02-13T19:02:55.285120225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:02:55.285547 containerd[1463]: time="2025-02-13T19:02:55.285495986Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:02:55.285718 containerd[1463]: time="2025-02-13T19:02:55.285700873Z" level=info msg="Connect containerd service" Feb 13 19:02:55.285800 containerd[1463]: time="2025-02-13T19:02:55.285787966Z" level=info msg="using legacy CRI server" Feb 13 19:02:55.285862 containerd[1463]: time="2025-02-13T19:02:55.285850253Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:02:55.286212 containerd[1463]: 
time="2025-02-13T19:02:55.286186570Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:02:55.288028 containerd[1463]: time="2025-02-13T19:02:55.287993728Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:02:55.288416 containerd[1463]: time="2025-02-13T19:02:55.288365175Z" level=info msg="Start subscribing containerd event" Feb 13 19:02:55.288457 containerd[1463]: time="2025-02-13T19:02:55.288432238Z" level=info msg="Start recovering state" Feb 13 19:02:55.288571 containerd[1463]: time="2025-02-13T19:02:55.288555347Z" level=info msg="Start event monitor" Feb 13 19:02:55.288659 containerd[1463]: time="2025-02-13T19:02:55.288571295Z" level=info msg="Start snapshots syncer" Feb 13 19:02:55.288659 containerd[1463]: time="2025-02-13T19:02:55.288581117Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:02:55.288659 containerd[1463]: time="2025-02-13T19:02:55.288588166Z" level=info msg="Start streaming server" Feb 13 19:02:55.289032 containerd[1463]: time="2025-02-13T19:02:55.289007378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:02:55.289236 containerd[1463]: time="2025-02-13T19:02:55.289212611Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:02:55.289482 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:02:55.290655 containerd[1463]: time="2025-02-13T19:02:55.290636343Z" level=info msg="containerd successfully booted in 0.042652s" Feb 13 19:02:55.512934 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:02:55.532385 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 19:02:55.540368 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:02:55.546083 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:02:55.548171 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:02:55.550711 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:02:55.564981 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:02:55.575421 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:02:55.577431 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:02:55.578429 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:02:56.165187 systemd-networkd[1398]: eth0: Gained IPv6LL Feb 13 19:02:56.171130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:02:56.172598 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:02:56.186337 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:02:56.188534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:56.190344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:02:56.203538 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:02:56.203754 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:02:56.205348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:02:56.214980 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:02:56.673386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:56.674612 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:02:56.677496 (kubelet)[1536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:56.679156 systemd[1]: Startup finished in 603ms (kernel) + 4.425s (initrd) + 3.580s (userspace) = 8.609s. Feb 13 19:02:57.161693 kubelet[1536]: E0213 19:02:57.161582 1536 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:57.163897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:57.164045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:57.166192 systemd[1]: kubelet.service: Consumed 826ms CPU time, 239.9M memory peak. Feb 13 19:03:01.963009 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:03:01.964549 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:44146.service - OpenSSH per-connection server daemon (10.0.0.1:44146). Feb 13 19:03:02.023559 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 44146 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:02.025297 sshd-session[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:02.036462 systemd-logind[1438]: New session 1 of user core. Feb 13 19:03:02.038004 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:03:02.051312 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:03:02.063767 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:03:02.066080 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:03:02.071665 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:03:02.073687 systemd-logind[1438]: New session c1 of user core. Feb 13 19:03:02.175801 systemd[1554]: Queued start job for default target default.target. Feb 13 19:03:02.187011 systemd[1554]: Created slice app.slice - User Application Slice. Feb 13 19:03:02.187043 systemd[1554]: Reached target paths.target - Paths. Feb 13 19:03:02.187104 systemd[1554]: Reached target timers.target - Timers. Feb 13 19:03:02.188319 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:03:02.200196 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:03:02.200317 systemd[1554]: Reached target sockets.target - Sockets. Feb 13 19:03:02.200358 systemd[1554]: Reached target basic.target - Basic System. Feb 13 19:03:02.200387 systemd[1554]: Reached target default.target - Main User Target. Feb 13 19:03:02.200423 systemd[1554]: Startup finished in 121ms. Feb 13 19:03:02.200532 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:03:02.202091 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:03:02.285560 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:44162.service - OpenSSH per-connection server daemon (10.0.0.1:44162). Feb 13 19:03:02.325165 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 44162 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:02.326451 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:02.330328 systemd-logind[1438]: New session 2 of user core. Feb 13 19:03:02.349324 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:03:02.399840 sshd[1567]: Connection closed by 10.0.0.1 port 44162 Feb 13 19:03:02.400222 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:02.411235 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:44162.service: Deactivated successfully. Feb 13 19:03:02.412746 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:03:02.413553 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:03:02.415251 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:44174.service - OpenSSH per-connection server daemon (10.0.0.1:44174). Feb 13 19:03:02.416098 systemd-logind[1438]: Removed session 2. Feb 13 19:03:02.459912 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 44174 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:02.461027 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:02.466122 systemd-logind[1438]: New session 3 of user core. Feb 13 19:03:02.475263 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:03:02.524380 sshd[1575]: Connection closed by 10.0.0.1 port 44174 Feb 13 19:03:02.524721 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:02.543196 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:44174.service: Deactivated successfully. Feb 13 19:03:02.546132 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:03:02.547303 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:03:02.548339 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:44180.service - OpenSSH per-connection server daemon (10.0.0.1:44180). Feb 13 19:03:02.548979 systemd-logind[1438]: Removed session 3. 
Feb 13 19:03:02.593246 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 44180 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:02.594517 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:02.599465 systemd-logind[1438]: New session 4 of user core. Feb 13 19:03:02.611214 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:03:02.661456 sshd[1583]: Connection closed by 10.0.0.1 port 44180 Feb 13 19:03:02.661841 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:02.672481 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:44180.service: Deactivated successfully. Feb 13 19:03:02.673876 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:03:02.674606 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:03:02.676915 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:36196.service - OpenSSH per-connection server daemon (10.0.0.1:36196). Feb 13 19:03:02.677714 systemd-logind[1438]: Removed session 4. Feb 13 19:03:02.722359 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 36196 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:02.723576 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:02.727567 systemd-logind[1438]: New session 5 of user core. Feb 13 19:03:02.739204 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:03:02.798363 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:03:02.798647 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:02.810307 sudo[1592]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:02.813195 sshd[1591]: Connection closed by 10.0.0.1 port 36196 Feb 13 19:03:02.813114 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:02.827251 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:36196.service: Deactivated successfully. Feb 13 19:03:02.828700 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:03:02.829407 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:03:02.840363 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:36206.service - OpenSSH per-connection server daemon (10.0.0.1:36206). Feb 13 19:03:02.841300 systemd-logind[1438]: Removed session 5. Feb 13 19:03:02.883246 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 36206 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:02.884580 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:02.889356 systemd-logind[1438]: New session 6 of user core. Feb 13 19:03:02.907316 systemd[1]: Started session-6.scope - Session 6 of User core. 
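The journal above interleaves sshd connection events with `pam_unix` session open/close records for both `sshd:session` and `sudo:session`. As a minimal sketch (not part of the log; the regex is my assumption about the exact `pam_unix` record shape seen here), the session lifecycle can be pulled out of such lines:

```python
import re

# Matches pam_unix session open/close records as they appear in the journal, e.g.:
#   sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
PAM_RE = re.compile(
    r"(?P<proc>[\w-]+)\[(?P<pid>\d+)\]: "
    r"pam_unix\((?P<service>[\w:]+)\): session (?P<action>opened|closed) for user (?P<user>\w+)"
)

def parse_pam_events(lines):
    """Yield (pid, service, action, user) for each pam_unix session record."""
    for line in lines:
        m = PAM_RE.search(line)
        if m:
            yield (int(m.group("pid")), m.group("service"),
                   m.group("action"), m.group("user"))

sample = [
    'Feb 13 19:03:02.461027 sshd-session[1572]: pam_unix(sshd:session): '
    'session opened for user core(uid=500) by core(uid=0)',
    'Feb 13 19:03:02.524721 sshd-session[1572]: pam_unix(sshd:session): '
    'session closed for user core',
]
events = list(parse_pam_events(sample))
```

Pairing `opened`/`closed` events by PID is enough to reconstruct the session sequence (session-2 through session-7) visible in this capture.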
Feb 13 19:03:02.957798 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:03:02.958083 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:02.961838 sudo[1602]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:02.966448 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:03:02.966709 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:02.983377 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:03:03.006051 augenrules[1624]: No rules Feb 13 19:03:03.006714 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:03:03.008096 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:03:03.009253 sudo[1601]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:03.011107 sshd[1600]: Connection closed by 10.0.0.1 port 36206 Feb 13 19:03:03.011008 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:03.028121 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:36206.service: Deactivated successfully. Feb 13 19:03:03.029635 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:03:03.030886 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:03:03.032099 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:36218.service - OpenSSH per-connection server daemon (10.0.0.1:36218). Feb 13 19:03:03.033832 systemd-logind[1438]: Removed session 6. Feb 13 19:03:03.075639 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 36218 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:03:03.077803 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:03.082125 systemd-logind[1438]: New session 7 of user core. 
Feb 13 19:03:03.095269 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:03:03.145717 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:03:03.146133 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:03.169411 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:03:03.184406 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:03:03.184641 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:03:03.700758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:03.700908 systemd[1]: kubelet.service: Consumed 826ms CPU time, 239.9M memory peak. Feb 13 19:03:03.708361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:03.722839 systemd[1]: Reload requested from client PID 1685 ('systemctl') (unit session-7.scope)... Feb 13 19:03:03.722856 systemd[1]: Reloading... Feb 13 19:03:03.802089 zram_generator::config[1728]: No configuration found. Feb 13 19:03:03.973627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:04.051733 systemd[1]: Reloading finished in 328 ms. Feb 13 19:03:04.091861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:04.095105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:04.095486 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:03:04.095771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:04.095808 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.4M memory peak. Feb 13 19:03:04.097269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:03:04.189461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:04.193171 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:03:04.238709 kubelet[1775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:03:04.238709 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:03:04.238709 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:03:04.239609 kubelet[1775]: I0213 19:03:04.239553 1775 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:03:05.147929 kubelet[1775]: I0213 19:03:05.147889 1775 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:03:05.147929 kubelet[1775]: I0213 19:03:05.147920 1775 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:03:05.148155 kubelet[1775]: I0213 19:03:05.148141 1775 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:03:05.177442 kubelet[1775]: I0213 19:03:05.177341 1775 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:03:05.190514 kubelet[1775]: I0213 19:03:05.190487 1775 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:03:05.191207 kubelet[1775]: I0213 19:03:05.191056 1775 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:03:05.191282 kubelet[1775]: I0213 19:03:05.191109 1775 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:03:05.191574 kubelet[1775]: I0213 19:03:05.191531 1775 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:03:05.191574 kubelet[1775]: I0213 19:03:05.191560 1775 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:03:05.191775 kubelet[1775]: I0213 19:03:05.191750 1775 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:03:05.192850 kubelet[1775]: I0213 19:03:05.192821 1775 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:03:05.192850 kubelet[1775]: I0213 19:03:05.192848 1775 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:03:05.193524 kubelet[1775]: I0213 19:03:05.193303 1775 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:03:05.193618 kubelet[1775]: E0213 19:03:05.193549 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:05.193644 kubelet[1775]: I0213 19:03:05.193617 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:03:05.193644 kubelet[1775]: E0213 19:03:05.193619 1775 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:05.197115 kubelet[1775]: I0213 19:03:05.197090 1775 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:03:05.197606 kubelet[1775]: I0213 19:03:05.197589 1775 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:03:05.197710 kubelet[1775]: W0213 19:03:05.197698 1775 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
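The `container_manager_linux.go` record above logs the full `nodeConfig` as embedded JSON, including the hard eviction thresholds. As a hedged illustration (the payload below is a trimmed copy of the logged record, keeping only the fields used here), the thresholds can be summarized programmatically:

```python
import json

# Trimmed copy of the nodeConfig payload logged by container_manager_linux.go
# above (the real record carries many more fields).
node_config = json.loads("""
{"NodeName":"10.0.0.44","CgroupDriver":"systemd",
 "HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]}
""")

def eviction_summary(cfg):
    """Return {signal: human-readable threshold} from HardEvictionThresholds."""
    out = {}
    for t in cfg["HardEvictionThresholds"]:
        v = t["Value"]
        out[t["Signal"]] = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
    return out

summary = eviction_summary(node_config)
```

These are the defaults the kubelet applies when no eviction thresholds are set in its config file.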
Feb 13 19:03:05.199024 kubelet[1775]: I0213 19:03:05.198997 1775 server.go:1264] "Started kubelet" Feb 13 19:03:05.200093 kubelet[1775]: I0213 19:03:05.199129 1775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:03:05.200093 kubelet[1775]: I0213 19:03:05.199297 1775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:03:05.200093 kubelet[1775]: I0213 19:03:05.199544 1775 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:03:05.200202 kubelet[1775]: I0213 19:03:05.200183 1775 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:03:05.206097 kubelet[1775]: I0213 19:03:05.205782 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:03:05.206852 kubelet[1775]: W0213 19:03:05.206818 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:03:05.206917 kubelet[1775]: E0213 19:03:05.206859 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:03:05.207105 kubelet[1775]: W0213 19:03:05.207002 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:03:05.207105 kubelet[1775]: E0213 19:03:05.207026 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:03:05.211603 kubelet[1775]: 
E0213 19:03:05.207176 1775 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.44.1823d9d3e507f8d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.44,UID:10.0.0.44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.44,},FirstTimestamp:2025-02-13 19:03:05.198975187 +0000 UTC m=+1.002818579,LastTimestamp:2025-02-13 19:03:05.198975187 +0000 UTC m=+1.002818579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.44,}" Feb 13 19:03:05.211603 kubelet[1775]: E0213 19:03:05.211572 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:05.211723 kubelet[1775]: I0213 19:03:05.211659 1775 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:03:05.212371 kubelet[1775]: I0213 19:03:05.211752 1775 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:03:05.213621 kubelet[1775]: I0213 19:03:05.213252 1775 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:03:05.214706 kubelet[1775]: I0213 19:03:05.214674 1775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:03:05.215766 kubelet[1775]: E0213 19:03:05.215745 1775 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:03:05.216494 kubelet[1775]: I0213 19:03:05.216448 1775 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:03:05.216687 kubelet[1775]: I0213 19:03:05.216646 1775 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:03:05.225512 kubelet[1775]: E0213 19:03:05.225466 1775 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.44\" not found" node="10.0.0.44" Feb 13 19:03:05.226548 kubelet[1775]: I0213 19:03:05.226529 1775 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:03:05.226548 kubelet[1775]: I0213 19:03:05.226543 1775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:03:05.226661 kubelet[1775]: I0213 19:03:05.226562 1775 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:03:05.312925 kubelet[1775]: I0213 19:03:05.312856 1775 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.44" Feb 13 19:03:05.361963 kubelet[1775]: I0213 19:03:05.361895 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:03:05.362994 kubelet[1775]: I0213 19:03:05.362916 1775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:03:05.363199 kubelet[1775]: I0213 19:03:05.363171 1775 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:03:05.363199 kubelet[1775]: I0213 19:03:05.363203 1775 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:03:05.363270 kubelet[1775]: E0213 19:03:05.363257 1775 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:03:05.375223 kubelet[1775]: I0213 19:03:05.375170 1775 policy_none.go:49] "None policy: Start" Feb 13 19:03:05.376426 kubelet[1775]: I0213 19:03:05.376389 1775 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:03:05.376426 kubelet[1775]: I0213 19:03:05.376425 1775 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:03:05.377231 kubelet[1775]: I0213 19:03:05.377194 1775 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.44" Feb 13 19:03:05.400439 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:03:05.420930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:03:05.423870 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:03:05.431002 kubelet[1775]: I0213 19:03:05.430879 1775 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:03:05.431139 kubelet[1775]: I0213 19:03:05.431101 1775 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:03:05.431285 kubelet[1775]: I0213 19:03:05.431211 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:03:05.432628 kubelet[1775]: E0213 19:03:05.432601 1775 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.44\" not found" Feb 13 19:03:05.443736 kubelet[1775]: E0213 19:03:05.443710 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:05.544374 kubelet[1775]: E0213 19:03:05.544303 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:05.644859 kubelet[1775]: E0213 19:03:05.644724 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:05.710097 sudo[1636]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:05.711618 sshd[1635]: Connection closed by 10.0.0.1 port 36218 Feb 13 19:03:05.712185 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:05.715594 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:36218.service: Deactivated successfully. Feb 13 19:03:05.717379 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:03:05.718122 systemd[1]: session-7.scope: Consumed 441ms CPU time, 113.3M memory peak. Feb 13 19:03:05.719012 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:03:05.720131 systemd-logind[1438]: Removed session 7. 
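The kubelet entries throughout this capture use klog's header format (`I0213 19:03:05.198997 1775 server.go:1264] "Started kubelet"`): a severity letter, MMDD date, wall-clock time, PID, and source file:line. As a sketch under my own assumptions about that header shape (the regex is not from the log), the headers can be split out for filtering, e.g. to isolate the repeating `E`-severity "node not found" records:

```python
import re

# klog header: severity (I/W/E/F), MMDD, HH:MM:SS.micros, pid, file.go:line]
KLOG_RE = re.compile(
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+) (?P<src>[\w./_-]+:\d+)\]"
)

def parse_klog(line):
    """Return a dict of klog header fields, or None if no header is present."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

hdr = parse_klog('E0213 19:03:05.443710 1775 kubelet_node_status.go:462] '
                 '"Error getting the current node from lister"')
```

Grouping by `src` quickly shows that `kubelet_node_status.go:462` fires roughly every 100 ms until the node object appears in the lister cache.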
Feb 13 19:03:05.745782 kubelet[1775]: E0213 19:03:05.745738 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:05.846138 kubelet[1775]: E0213 19:03:05.846095 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:05.946531 kubelet[1775]: E0213 19:03:05.946497 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:06.046958 kubelet[1775]: E0213 19:03:06.046880 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:06.147351 kubelet[1775]: E0213 19:03:06.147314 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:06.150494 kubelet[1775]: I0213 19:03:06.150462 1775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:03:06.150720 kubelet[1775]: W0213 19:03:06.150671 1775 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:03:06.150720 kubelet[1775]: W0213 19:03:06.150696 1775 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:03:06.194379 kubelet[1775]: E0213 19:03:06.194345 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:06.248349 kubelet[1775]: E0213 19:03:06.248307 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not 
found" Feb 13 19:03:06.348850 kubelet[1775]: E0213 19:03:06.348757 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:06.449446 kubelet[1775]: E0213 19:03:06.449396 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:06.549930 kubelet[1775]: E0213 19:03:06.549872 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.44\" not found" Feb 13 19:03:06.651104 kubelet[1775]: I0213 19:03:06.650903 1775 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:03:06.651266 containerd[1463]: time="2025-02-13T19:03:06.651226466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:03:06.651909 kubelet[1775]: I0213 19:03:06.651708 1775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:03:07.194216 kubelet[1775]: I0213 19:03:07.194152 1775 apiserver.go:52] "Watching apiserver" Feb 13 19:03:07.196364 kubelet[1775]: E0213 19:03:07.194398 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:07.207275 kubelet[1775]: I0213 19:03:07.207082 1775 topology_manager.go:215] "Topology Admit Handler" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" podNamespace="kube-system" podName="cilium-ggpwv" Feb 13 19:03:07.207275 kubelet[1775]: I0213 19:03:07.207259 1775 topology_manager.go:215] "Topology Admit Handler" podUID="cb98fba7-7eb7-49fc-946f-6f78d55db2b0" podNamespace="kube-system" podName="kube-proxy-dqq44" Feb 13 19:03:07.212660 kubelet[1775]: I0213 19:03:07.212624 1775 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:03:07.214714 systemd[1]: Created slice 
kubepods-burstable-pod6c9a004d_6a92_4214_9c2a_ea634fe6f451.slice - libcontainer container kubepods-burstable-pod6c9a004d_6a92_4214_9c2a_ea634fe6f451.slice. Feb 13 19:03:07.223526 kubelet[1775]: I0213 19:03:07.223476 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-xtables-lock\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223526 kubelet[1775]: I0213 19:03:07.223524 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c9a004d-6a92-4214-9c2a-ea634fe6f451-clustermesh-secrets\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223690 kubelet[1775]: I0213 19:03:07.223550 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-kernel\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223690 kubelet[1775]: I0213 19:03:07.223565 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hubble-tls\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223690 kubelet[1775]: I0213 19:03:07.223583 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb98fba7-7eb7-49fc-946f-6f78d55db2b0-lib-modules\") pod \"kube-proxy-dqq44\" (UID: 
\"cb98fba7-7eb7-49fc-946f-6f78d55db2b0\") " pod="kube-system/kube-proxy-dqq44" Feb 13 19:03:07.223690 kubelet[1775]: I0213 19:03:07.223599 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hostproc\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223690 kubelet[1775]: I0213 19:03:07.223613 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-lib-modules\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223690 kubelet[1775]: I0213 19:03:07.223627 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-cgroup\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223804 kubelet[1775]: I0213 19:03:07.223643 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-etc-cni-netd\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223804 kubelet[1775]: I0213 19:03:07.223658 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-config-path\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223804 kubelet[1775]: I0213 19:03:07.223674 
1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-net\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223804 kubelet[1775]: I0213 19:03:07.223689 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb98fba7-7eb7-49fc-946f-6f78d55db2b0-xtables-lock\") pod \"kube-proxy-dqq44\" (UID: \"cb98fba7-7eb7-49fc-946f-6f78d55db2b0\") " pod="kube-system/kube-proxy-dqq44" Feb 13 19:03:07.223804 kubelet[1775]: I0213 19:03:07.223706 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-run\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223804 kubelet[1775]: I0213 19:03:07.223721 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-bpf-maps\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223911 kubelet[1775]: I0213 19:03:07.223734 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cni-path\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223911 kubelet[1775]: I0213 19:03:07.223749 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22qz\" (UniqueName: 
\"kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-kube-api-access-r22qz\") pod \"cilium-ggpwv\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") " pod="kube-system/cilium-ggpwv" Feb 13 19:03:07.223911 kubelet[1775]: I0213 19:03:07.223768 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb98fba7-7eb7-49fc-946f-6f78d55db2b0-kube-proxy\") pod \"kube-proxy-dqq44\" (UID: \"cb98fba7-7eb7-49fc-946f-6f78d55db2b0\") " pod="kube-system/kube-proxy-dqq44" Feb 13 19:03:07.223911 kubelet[1775]: I0213 19:03:07.223813 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zx5t\" (UniqueName: \"kubernetes.io/projected/cb98fba7-7eb7-49fc-946f-6f78d55db2b0-kube-api-access-4zx5t\") pod \"kube-proxy-dqq44\" (UID: \"cb98fba7-7eb7-49fc-946f-6f78d55db2b0\") " pod="kube-system/kube-proxy-dqq44" Feb 13 19:03:07.245708 systemd[1]: Created slice kubepods-besteffort-podcb98fba7_7eb7_49fc_946f_6f78d55db2b0.slice - libcontainer container kubepods-besteffort-podcb98fba7_7eb7_49fc_946f_6f78d55db2b0.slice. Feb 13 19:03:07.541490 containerd[1463]: time="2025-02-13T19:03:07.541372701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggpwv,Uid:6c9a004d-6a92-4214-9c2a-ea634fe6f451,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:07.563976 containerd[1463]: time="2025-02-13T19:03:07.563811999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqq44,Uid:cb98fba7-7eb7-49fc-946f-6f78d55db2b0,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:08.056254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714675391.mount: Deactivated successfully. 
Feb 13 19:03:08.064338 containerd[1463]: time="2025-02-13T19:03:08.064287952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:03:08.065235 containerd[1463]: time="2025-02-13T19:03:08.065195554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:03:08.066040 containerd[1463]: time="2025-02-13T19:03:08.066010803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:03:08.066831 containerd[1463]: time="2025-02-13T19:03:08.066805347Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:03:08.067948 containerd[1463]: time="2025-02-13T19:03:08.067912519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:03:08.069993 containerd[1463]: time="2025-02-13T19:03:08.069940328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:03:08.071368 containerd[1463]: time="2025-02-13T19:03:08.071121812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.656674ms" Feb 13 19:03:08.074217 containerd[1463]: 
time="2025-02-13T19:03:08.074183156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.277572ms" Feb 13 19:03:08.195504 kubelet[1775]: E0213 19:03:08.195455 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201411539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201454179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201464949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201529644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201365800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201446391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201462485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:08.201625 containerd[1463]: time="2025-02-13T19:03:08.201534095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:08.293269 systemd[1]: Started cri-containerd-0b694794fe1202f798000d47910ef53b6af1b450953839224cd23dd2c2fcf597.scope - libcontainer container 0b694794fe1202f798000d47910ef53b6af1b450953839224cd23dd2c2fcf597. Feb 13 19:03:08.294883 systemd[1]: Started cri-containerd-7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad.scope - libcontainer container 7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad. Feb 13 19:03:08.318359 containerd[1463]: time="2025-02-13T19:03:08.318180939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggpwv,Uid:6c9a004d-6a92-4214-9c2a-ea634fe6f451,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\"" Feb 13 19:03:08.321417 containerd[1463]: time="2025-02-13T19:03:08.321364441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqq44,Uid:cb98fba7-7eb7-49fc-946f-6f78d55db2b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b694794fe1202f798000d47910ef53b6af1b450953839224cd23dd2c2fcf597\"" Feb 13 19:03:08.321536 containerd[1463]: time="2025-02-13T19:03:08.321380019Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:03:09.196532 kubelet[1775]: E0213 19:03:09.196466 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:10.197081 kubelet[1775]: E0213 19:03:10.196960 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:11.197967 kubelet[1775]: E0213 19:03:11.197910 1775 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:11.595366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379864578.mount: Deactivated successfully. Feb 13 19:03:12.198316 kubelet[1775]: E0213 19:03:12.198279 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:13.199456 kubelet[1775]: E0213 19:03:13.199411 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:13.291040 containerd[1463]: time="2025-02-13T19:03:13.290971532Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:13.291756 containerd[1463]: time="2025-02-13T19:03:13.291694186Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:03:13.292666 containerd[1463]: time="2025-02-13T19:03:13.292632278Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:13.294328 containerd[1463]: time="2025-02-13T19:03:13.294291708Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.972815848s" Feb 13 19:03:13.294423 containerd[1463]: time="2025-02-13T19:03:13.294329821Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:03:13.296436 containerd[1463]: time="2025-02-13T19:03:13.296339276Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:03:13.297633 containerd[1463]: time="2025-02-13T19:03:13.297574291Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:03:13.316347 containerd[1463]: time="2025-02-13T19:03:13.316289599Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\"" Feb 13 19:03:13.317365 containerd[1463]: time="2025-02-13T19:03:13.317326479Z" level=info msg="StartContainer for \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\"" Feb 13 19:03:13.337075 systemd[1]: run-containerd-runc-k8s.io-f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7-runc.sbpkxn.mount: Deactivated successfully. Feb 13 19:03:13.349301 systemd[1]: Started cri-containerd-f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7.scope - libcontainer container f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7. Feb 13 19:03:13.376448 containerd[1463]: time="2025-02-13T19:03:13.376397146Z" level=info msg="StartContainer for \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\" returns successfully" Feb 13 19:03:13.489996 systemd[1]: cri-containerd-f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7.scope: Deactivated successfully. 
Feb 13 19:03:13.635294 containerd[1463]: time="2025-02-13T19:03:13.635233757Z" level=info msg="shim disconnected" id=f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7 namespace=k8s.io Feb 13 19:03:13.635294 containerd[1463]: time="2025-02-13T19:03:13.635289730Z" level=warning msg="cleaning up after shim disconnected" id=f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7 namespace=k8s.io Feb 13 19:03:13.635294 containerd[1463]: time="2025-02-13T19:03:13.635298141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:14.199615 kubelet[1775]: E0213 19:03:14.199581 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:14.314821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7-rootfs.mount: Deactivated successfully. Feb 13 19:03:14.385175 containerd[1463]: time="2025-02-13T19:03:14.385134655Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:03:14.404330 containerd[1463]: time="2025-02-13T19:03:14.404276585Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\"" Feb 13 19:03:14.405584 containerd[1463]: time="2025-02-13T19:03:14.405532617Z" level=info msg="StartContainer for \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\"" Feb 13 19:03:14.436368 systemd[1]: Started cri-containerd-8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1.scope - libcontainer container 8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1. 
Feb 13 19:03:14.464225 containerd[1463]: time="2025-02-13T19:03:14.463954762Z" level=info msg="StartContainer for \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\" returns successfully" Feb 13 19:03:14.493476 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:14.493975 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:14.494487 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:14.499414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:14.499619 systemd[1]: cri-containerd-8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1.scope: Deactivated successfully. Feb 13 19:03:14.517251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:14.572020 containerd[1463]: time="2025-02-13T19:03:14.571481364Z" level=info msg="shim disconnected" id=8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1 namespace=k8s.io Feb 13 19:03:14.572020 containerd[1463]: time="2025-02-13T19:03:14.571547490Z" level=warning msg="cleaning up after shim disconnected" id=8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1 namespace=k8s.io Feb 13 19:03:14.572020 containerd[1463]: time="2025-02-13T19:03:14.571557181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:14.584377 containerd[1463]: time="2025-02-13T19:03:14.584328851Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:03:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:03:14.853472 containerd[1463]: time="2025-02-13T19:03:14.853347441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:14.854418 containerd[1463]: time="2025-02-13T19:03:14.854368444Z" level=info msg="stop 
pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:03:14.856223 containerd[1463]: time="2025-02-13T19:03:14.856162497Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:14.858433 containerd[1463]: time="2025-02-13T19:03:14.858367824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:14.859055 containerd[1463]: time="2025-02-13T19:03:14.858964034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.56252146s" Feb 13 19:03:14.859055 containerd[1463]: time="2025-02-13T19:03:14.858999170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:03:14.860996 containerd[1463]: time="2025-02-13T19:03:14.860954949Z" level=info msg="CreateContainer within sandbox \"0b694794fe1202f798000d47910ef53b6af1b450953839224cd23dd2c2fcf597\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:03:14.883309 containerd[1463]: time="2025-02-13T19:03:14.883253493Z" level=info msg="CreateContainer within sandbox \"0b694794fe1202f798000d47910ef53b6af1b450953839224cd23dd2c2fcf597\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ae9d2375ea8d068fa0bed4cd020f2c5ba5205c3c41f1b67d661a9a31707d8ee\"" Feb 13 19:03:14.884111 containerd[1463]: time="2025-02-13T19:03:14.883817518Z" level=info 
msg="StartContainer for \"4ae9d2375ea8d068fa0bed4cd020f2c5ba5205c3c41f1b67d661a9a31707d8ee\"" Feb 13 19:03:14.913326 systemd[1]: Started cri-containerd-4ae9d2375ea8d068fa0bed4cd020f2c5ba5205c3c41f1b67d661a9a31707d8ee.scope - libcontainer container 4ae9d2375ea8d068fa0bed4cd020f2c5ba5205c3c41f1b67d661a9a31707d8ee. Feb 13 19:03:14.955163 containerd[1463]: time="2025-02-13T19:03:14.954461745Z" level=info msg="StartContainer for \"4ae9d2375ea8d068fa0bed4cd020f2c5ba5205c3c41f1b67d661a9a31707d8ee\" returns successfully" Feb 13 19:03:15.200576 kubelet[1775]: E0213 19:03:15.200464 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:15.313943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1-rootfs.mount: Deactivated successfully. Feb 13 19:03:15.314414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357134410.mount: Deactivated successfully. 
Feb 13 19:03:15.394448 containerd[1463]: time="2025-02-13T19:03:15.394398289Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:03:15.445879 containerd[1463]: time="2025-02-13T19:03:15.445759974Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\"" Feb 13 19:03:15.447116 containerd[1463]: time="2025-02-13T19:03:15.446220751Z" level=info msg="StartContainer for \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\"" Feb 13 19:03:15.478674 systemd[1]: Started cri-containerd-ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8.scope - libcontainer container ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8. Feb 13 19:03:15.505679 containerd[1463]: time="2025-02-13T19:03:15.505635001Z" level=info msg="StartContainer for \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\" returns successfully" Feb 13 19:03:15.544180 systemd[1]: cri-containerd-ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8.scope: Deactivated successfully. 
Feb 13 19:03:15.682398 containerd[1463]: time="2025-02-13T19:03:15.682319750Z" level=info msg="shim disconnected" id=ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8 namespace=k8s.io Feb 13 19:03:15.682398 containerd[1463]: time="2025-02-13T19:03:15.682375127Z" level=warning msg="cleaning up after shim disconnected" id=ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8 namespace=k8s.io Feb 13 19:03:15.682398 containerd[1463]: time="2025-02-13T19:03:15.682386618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:16.201927 kubelet[1775]: E0213 19:03:16.201869 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:16.312857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8-rootfs.mount: Deactivated successfully. Feb 13 19:03:16.400553 containerd[1463]: time="2025-02-13T19:03:16.400511654Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:03:16.482002 kubelet[1775]: I0213 19:03:16.481815 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dqq44" podStartSLOduration=4.944551046 podStartE2EDuration="11.481797334s" podCreationTimestamp="2025-02-13 19:03:05 +0000 UTC" firstStartedPulling="2025-02-13 19:03:08.322405806 +0000 UTC m=+4.126249158" lastFinishedPulling="2025-02-13 19:03:14.859652054 +0000 UTC m=+10.663495446" observedRunningTime="2025-02-13 19:03:15.445519113 +0000 UTC m=+11.249362505" watchObservedRunningTime="2025-02-13 19:03:16.481797334 +0000 UTC m=+12.285640686" Feb 13 19:03:16.494754 containerd[1463]: time="2025-02-13T19:03:16.494644560Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\"" Feb 13 19:03:16.495227 containerd[1463]: time="2025-02-13T19:03:16.495134619Z" level=info msg="StartContainer for \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\"" Feb 13 19:03:16.520244 systemd[1]: Started cri-containerd-a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718.scope - libcontainer container a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718. Feb 13 19:03:16.541343 systemd[1]: cri-containerd-a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718.scope: Deactivated successfully. Feb 13 19:03:16.543695 containerd[1463]: time="2025-02-13T19:03:16.543569200Z" level=info msg="StartContainer for \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\" returns successfully" Feb 13 19:03:16.570631 containerd[1463]: time="2025-02-13T19:03:16.570568961Z" level=info msg="shim disconnected" id=a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718 namespace=k8s.io Feb 13 19:03:16.570631 containerd[1463]: time="2025-02-13T19:03:16.570627390Z" level=warning msg="cleaning up after shim disconnected" id=a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718 namespace=k8s.io Feb 13 19:03:16.570631 containerd[1463]: time="2025-02-13T19:03:16.570635651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:17.202394 kubelet[1775]: E0213 19:03:17.202336 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:17.312882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718-rootfs.mount: Deactivated successfully. 
Feb 13 19:03:17.409329 containerd[1463]: time="2025-02-13T19:03:17.409288612Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:03:17.435634 containerd[1463]: time="2025-02-13T19:03:17.435573173Z" level=info msg="CreateContainer within sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\"" Feb 13 19:03:17.436259 containerd[1463]: time="2025-02-13T19:03:17.436179742Z" level=info msg="StartContainer for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\"" Feb 13 19:03:17.471274 systemd[1]: Started cri-containerd-d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa.scope - libcontainer container d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa. 
Feb 13 19:03:17.501997 containerd[1463]: time="2025-02-13T19:03:17.501939668Z" level=info msg="StartContainer for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" returns successfully" Feb 13 19:03:17.591599 kubelet[1775]: I0213 19:03:17.591470 1775 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:03:18.168375 kernel: Initializing XFRM netlink socket Feb 13 19:03:18.203108 kubelet[1775]: E0213 19:03:18.203050 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:18.453885 kubelet[1775]: I0213 19:03:18.453755 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ggpwv" podStartSLOduration=8.478921314 podStartE2EDuration="13.453737469s" podCreationTimestamp="2025-02-13 19:03:05 +0000 UTC" firstStartedPulling="2025-02-13 19:03:08.320537788 +0000 UTC m=+4.124381180" lastFinishedPulling="2025-02-13 19:03:13.295353943 +0000 UTC m=+9.099197335" observedRunningTime="2025-02-13 19:03:18.448330282 +0000 UTC m=+14.252173634" watchObservedRunningTime="2025-02-13 19:03:18.453737469 +0000 UTC m=+14.257580861" Feb 13 19:03:19.203400 kubelet[1775]: E0213 19:03:19.203352 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:19.809447 systemd-networkd[1398]: cilium_host: Link UP Feb 13 19:03:19.809566 systemd-networkd[1398]: cilium_net: Link UP Feb 13 19:03:19.809702 systemd-networkd[1398]: cilium_net: Gained carrier Feb 13 19:03:19.809812 systemd-networkd[1398]: cilium_host: Gained carrier Feb 13 19:03:19.809915 systemd-networkd[1398]: cilium_net: Gained IPv6LL Feb 13 19:03:19.810027 systemd-networkd[1398]: cilium_host: Gained IPv6LL Feb 13 19:03:19.894179 systemd-networkd[1398]: cilium_vxlan: Link UP Feb 13 19:03:19.894185 systemd-networkd[1398]: cilium_vxlan: Gained carrier Feb 13 19:03:20.201143 kernel: NET: Registered 
PF_ALG protocol family Feb 13 19:03:20.204361 kubelet[1775]: E0213 19:03:20.204325 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:20.772207 systemd-networkd[1398]: lxc_health: Link UP Feb 13 19:03:20.774124 systemd-networkd[1398]: lxc_health: Gained carrier Feb 13 19:03:21.204681 kubelet[1775]: E0213 19:03:21.204561 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:21.766213 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Feb 13 19:03:21.872728 kubelet[1775]: I0213 19:03:21.872667 1775 topology_manager.go:215] "Topology Admit Handler" podUID="ad802c36-b8f8-4ae3-8275-6f5074d7f059" podNamespace="default" podName="nginx-deployment-85f456d6dd-x6mz8" Feb 13 19:03:21.877873 systemd[1]: Created slice kubepods-besteffort-podad802c36_b8f8_4ae3_8275_6f5074d7f059.slice - libcontainer container kubepods-besteffort-podad802c36_b8f8_4ae3_8275_6f5074d7f059.slice. 
Feb 13 19:03:21.917129 kubelet[1775]: I0213 19:03:21.917086 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462x2\" (UniqueName: \"kubernetes.io/projected/ad802c36-b8f8-4ae3-8275-6f5074d7f059-kube-api-access-462x2\") pod \"nginx-deployment-85f456d6dd-x6mz8\" (UID: \"ad802c36-b8f8-4ae3-8275-6f5074d7f059\") " pod="default/nginx-deployment-85f456d6dd-x6mz8" Feb 13 19:03:22.181059 containerd[1463]: time="2025-02-13T19:03:22.180728885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-x6mz8,Uid:ad802c36-b8f8-4ae3-8275-6f5074d7f059,Namespace:default,Attempt:0,}" Feb 13 19:03:22.205021 kubelet[1775]: E0213 19:03:22.204972 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:22.292187 systemd-networkd[1398]: lxccb38bf02bb1a: Link UP Feb 13 19:03:22.299044 kernel: eth0: renamed from tmp4674c Feb 13 19:03:22.302646 systemd-networkd[1398]: lxccb38bf02bb1a: Gained carrier Feb 13 19:03:22.789193 systemd-networkd[1398]: lxc_health: Gained IPv6LL Feb 13 19:03:23.205810 kubelet[1775]: E0213 19:03:23.205746 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:24.206838 kubelet[1775]: E0213 19:03:24.206776 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:24.325237 systemd-networkd[1398]: lxccb38bf02bb1a: Gained IPv6LL Feb 13 19:03:25.195118 kubelet[1775]: E0213 19:03:25.194335 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:25.207722 kubelet[1775]: E0213 19:03:25.207687 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:25.316046 containerd[1463]: 
time="2025-02-13T19:03:25.315930632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:25.316426 containerd[1463]: time="2025-02-13T19:03:25.316008180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:25.316426 containerd[1463]: time="2025-02-13T19:03:25.316049472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:25.316611 containerd[1463]: time="2025-02-13T19:03:25.316524672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:25.344307 systemd[1]: Started cri-containerd-4674c285b50574da49d8de682f4ba55b252d129648a031934426a786c9114542.scope - libcontainer container 4674c285b50574da49d8de682f4ba55b252d129648a031934426a786c9114542. 
Feb 13 19:03:25.354417 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:03:25.372045 containerd[1463]: time="2025-02-13T19:03:25.372007106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-x6mz8,Uid:ad802c36-b8f8-4ae3-8275-6f5074d7f059,Namespace:default,Attempt:0,} returns sandbox id \"4674c285b50574da49d8de682f4ba55b252d129648a031934426a786c9114542\"" Feb 13 19:03:25.373542 containerd[1463]: time="2025-02-13T19:03:25.373513491Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:03:26.208442 kubelet[1775]: E0213 19:03:26.208400 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:27.210233 kubelet[1775]: E0213 19:03:27.210149 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:27.300619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561596118.mount: Deactivated successfully. 
Feb 13 19:03:28.052458 containerd[1463]: time="2025-02-13T19:03:28.052408506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:28.053412 containerd[1463]: time="2025-02-13T19:03:28.053227786Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:03:28.054160 containerd[1463]: time="2025-02-13T19:03:28.054100873Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:28.056855 containerd[1463]: time="2025-02-13T19:03:28.056825992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:28.058027 containerd[1463]: time="2025-02-13T19:03:28.057987082Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.684341634s" Feb 13 19:03:28.058027 containerd[1463]: time="2025-02-13T19:03:28.058023127Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:03:28.060034 containerd[1463]: time="2025-02-13T19:03:28.059989495Z" level=info msg="CreateContainer within sandbox \"4674c285b50574da49d8de682f4ba55b252d129648a031934426a786c9114542\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:03:28.071173 containerd[1463]: time="2025-02-13T19:03:28.071132205Z" level=info msg="CreateContainer within sandbox 
\"4674c285b50574da49d8de682f4ba55b252d129648a031934426a786c9114542\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a0dcde8dc9c7857ec50174058d1d6242ae2c8ff37e3539bca04d4a7af4d72725\""
Feb 13 19:03:28.071835 containerd[1463]: time="2025-02-13T19:03:28.071779179Z" level=info msg="StartContainer for \"a0dcde8dc9c7857ec50174058d1d6242ae2c8ff37e3539bca04d4a7af4d72725\""
Feb 13 19:03:28.099230 systemd[1]: Started cri-containerd-a0dcde8dc9c7857ec50174058d1d6242ae2c8ff37e3539bca04d4a7af4d72725.scope - libcontainer container a0dcde8dc9c7857ec50174058d1d6242ae2c8ff37e3539bca04d4a7af4d72725.
Feb 13 19:03:28.119584 containerd[1463]: time="2025-02-13T19:03:28.119543047Z" level=info msg="StartContainer for \"a0dcde8dc9c7857ec50174058d1d6242ae2c8ff37e3539bca04d4a7af4d72725\" returns successfully"
Feb 13 19:03:28.211117 kubelet[1775]: E0213 19:03:28.211050 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:29.211727 kubelet[1775]: E0213 19:03:29.211678 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:30.212194 kubelet[1775]: E0213 19:03:30.212153 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:31.212359 kubelet[1775]: E0213 19:03:31.212269 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:32.212625 kubelet[1775]: E0213 19:03:32.212580 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:33.213780 kubelet[1775]: E0213 19:03:33.213708 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:33.645436 kubelet[1775]: I0213 19:03:33.645372 1775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:03:33.646275 kubelet[1775]: E0213 19:03:33.646215 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:33.696633 kubelet[1775]: I0213 19:03:33.696579 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-x6mz8" podStartSLOduration=10.011018178 podStartE2EDuration="12.696560043s" podCreationTimestamp="2025-02-13 19:03:21 +0000 UTC" firstStartedPulling="2025-02-13 19:03:25.373265858 +0000 UTC m=+21.177109249" lastFinishedPulling="2025-02-13 19:03:28.058807722 +0000 UTC m=+23.862651114" observedRunningTime="2025-02-13 19:03:28.435634727 +0000 UTC m=+24.239478079" watchObservedRunningTime="2025-02-13 19:03:33.696560043 +0000 UTC m=+29.500403435"
Feb 13 19:03:34.213945 kubelet[1775]: E0213 19:03:34.213885 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:34.323279 kubelet[1775]: I0213 19:03:34.323237 1775 topology_manager.go:215] "Topology Admit Handler" podUID="9d393e19-8877-475c-b94b-46d4262211f5" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 19:03:34.329904 systemd[1]: Created slice kubepods-besteffort-pod9d393e19_8877_475c_b94b_46d4262211f5.slice - libcontainer container kubepods-besteffort-pod9d393e19_8877_475c_b94b_46d4262211f5.slice.
Feb 13 19:03:34.387644 kubelet[1775]: I0213 19:03:34.387597 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9d393e19-8877-475c-b94b-46d4262211f5-data\") pod \"nfs-server-provisioner-0\" (UID: \"9d393e19-8877-475c-b94b-46d4262211f5\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:03:34.387644 kubelet[1775]: I0213 19:03:34.387645 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqdx\" (UniqueName: \"kubernetes.io/projected/9d393e19-8877-475c-b94b-46d4262211f5-kube-api-access-lpqdx\") pod \"nfs-server-provisioner-0\" (UID: \"9d393e19-8877-475c-b94b-46d4262211f5\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:03:34.436621 kubelet[1775]: E0213 19:03:34.436587 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:03:34.635140 containerd[1463]: time="2025-02-13T19:03:34.634097951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9d393e19-8877-475c-b94b-46d4262211f5,Namespace:default,Attempt:0,}"
Feb 13 19:03:34.681573 systemd-networkd[1398]: lxca50376ca264b: Link UP
Feb 13 19:03:34.682260 kernel: eth0: renamed from tmp2cdcf
Feb 13 19:03:34.700835 systemd-networkd[1398]: lxca50376ca264b: Gained carrier
Feb 13 19:03:34.902730 containerd[1463]: time="2025-02-13T19:03:34.902479411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:34.902730 containerd[1463]: time="2025-02-13T19:03:34.902583622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:34.902730 containerd[1463]: time="2025-02-13T19:03:34.902599943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:34.902730 containerd[1463]: time="2025-02-13T19:03:34.902687152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:34.921263 systemd[1]: Started cri-containerd-2cdcf2bee3c9713c88e071de4343b0cc2d91d4e12638c9f50727891ada11fd42.scope - libcontainer container 2cdcf2bee3c9713c88e071de4343b0cc2d91d4e12638c9f50727891ada11fd42.
Feb 13 19:03:34.931051 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:03:34.946043 containerd[1463]: time="2025-02-13T19:03:34.945973691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9d393e19-8877-475c-b94b-46d4262211f5,Namespace:default,Attempt:0,} returns sandbox id \"2cdcf2bee3c9713c88e071de4343b0cc2d91d4e12638c9f50727891ada11fd42\""
Feb 13 19:03:34.947737 containerd[1463]: time="2025-02-13T19:03:34.947635585Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 19:03:35.214231 kubelet[1775]: E0213 19:03:35.214059 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:36.214321 kubelet[1775]: E0213 19:03:36.214266 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:36.678376 systemd-networkd[1398]: lxca50376ca264b: Gained IPv6LL
Feb 13 19:03:36.933392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694061267.mount: Deactivated successfully.
Feb 13 19:03:37.214719 kubelet[1775]: E0213 19:03:37.214591 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:38.215564 kubelet[1775]: E0213 19:03:38.215518 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:38.306679 containerd[1463]: time="2025-02-13T19:03:38.306617016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:38.307768 containerd[1463]: time="2025-02-13T19:03:38.307708469Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Feb 13 19:03:38.309076 containerd[1463]: time="2025-02-13T19:03:38.309034501Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:38.311700 containerd[1463]: time="2025-02-13T19:03:38.311659764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:38.312984 containerd[1463]: time="2025-02-13T19:03:38.312945233Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.365237321s"
Feb 13 19:03:38.313030 containerd[1463]: time="2025-02-13T19:03:38.312986877Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 13 19:03:38.315621 containerd[1463]: time="2025-02-13T19:03:38.315435485Z" level=info msg="CreateContainer within sandbox \"2cdcf2bee3c9713c88e071de4343b0cc2d91d4e12638c9f50727891ada11fd42\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 19:03:38.328336 containerd[1463]: time="2025-02-13T19:03:38.328285856Z" level=info msg="CreateContainer within sandbox \"2cdcf2bee3c9713c88e071de4343b0cc2d91d4e12638c9f50727891ada11fd42\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"96bf3ebc0a4b78d6aaed94881004eab2c2ba10915041ad03a3acd4caa663068a\""
Feb 13 19:03:38.328870 containerd[1463]: time="2025-02-13T19:03:38.328838423Z" level=info msg="StartContainer for \"96bf3ebc0a4b78d6aaed94881004eab2c2ba10915041ad03a3acd4caa663068a\""
Feb 13 19:03:38.414179 systemd[1]: run-containerd-runc-k8s.io-96bf3ebc0a4b78d6aaed94881004eab2c2ba10915041ad03a3acd4caa663068a-runc.axcyGO.mount: Deactivated successfully.
Feb 13 19:03:38.424277 systemd[1]: Started cri-containerd-96bf3ebc0a4b78d6aaed94881004eab2c2ba10915041ad03a3acd4caa663068a.scope - libcontainer container 96bf3ebc0a4b78d6aaed94881004eab2c2ba10915041ad03a3acd4caa663068a.
Feb 13 19:03:38.529454 containerd[1463]: time="2025-02-13T19:03:38.529328167Z" level=info msg="StartContainer for \"96bf3ebc0a4b78d6aaed94881004eab2c2ba10915041ad03a3acd4caa663068a\" returns successfully"
Feb 13 19:03:39.216683 kubelet[1775]: E0213 19:03:39.216640 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:39.469048 kubelet[1775]: I0213 19:03:39.468776 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.102188355 podStartE2EDuration="5.468760676s" podCreationTimestamp="2025-02-13 19:03:34 +0000 UTC" firstStartedPulling="2025-02-13 19:03:34.947350435 +0000 UTC m=+30.751193827" lastFinishedPulling="2025-02-13 19:03:38.313922756 +0000 UTC m=+34.117766148" observedRunningTime="2025-02-13 19:03:39.468144746 +0000 UTC m=+35.271988138" watchObservedRunningTime="2025-02-13 19:03:39.468760676 +0000 UTC m=+35.272604068"
Feb 13 19:03:40.071877 update_engine[1446]: I20250213 19:03:40.071781 1446 update_attempter.cc:509] Updating boot flags...
Feb 13 19:03:40.106095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3174)
Feb 13 19:03:40.132219 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3178)
Feb 13 19:03:40.171135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3178)
Feb 13 19:03:40.217677 kubelet[1775]: E0213 19:03:40.217618 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:41.218318 kubelet[1775]: E0213 19:03:41.218265 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:42.218760 kubelet[1775]: E0213 19:03:42.218708 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:43.218896 kubelet[1775]: E0213 19:03:43.218835 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:44.219618 kubelet[1775]: E0213 19:03:44.219560 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:45.193909 kubelet[1775]: E0213 19:03:45.193861 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:45.220254 kubelet[1775]: E0213 19:03:45.220213 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:46.221258 kubelet[1775]: E0213 19:03:46.221203 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:47.221366 kubelet[1775]: E0213 19:03:47.221324 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:48.221835 kubelet[1775]: E0213 19:03:48.221787 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:48.735607 kubelet[1775]: I0213 19:03:48.735562 1775 topology_manager.go:215] "Topology Admit Handler" podUID="0403a786-a91b-4ddc-b6c6-e7852120c02d" podNamespace="default" podName="test-pod-1"
Feb 13 19:03:48.740382 systemd[1]: Created slice kubepods-besteffort-pod0403a786_a91b_4ddc_b6c6_e7852120c02d.slice - libcontainer container kubepods-besteffort-pod0403a786_a91b_4ddc_b6c6_e7852120c02d.slice.
Feb 13 19:03:48.868636 kubelet[1775]: I0213 19:03:48.868564 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjgf\" (UniqueName: \"kubernetes.io/projected/0403a786-a91b-4ddc-b6c6-e7852120c02d-kube-api-access-brjgf\") pod \"test-pod-1\" (UID: \"0403a786-a91b-4ddc-b6c6-e7852120c02d\") " pod="default/test-pod-1"
Feb 13 19:03:48.868636 kubelet[1775]: I0213 19:03:48.868610 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-356407ac-29c4-40e2-88bf-1277b1dd8ea5\" (UniqueName: \"kubernetes.io/nfs/0403a786-a91b-4ddc-b6c6-e7852120c02d-pvc-356407ac-29c4-40e2-88bf-1277b1dd8ea5\") pod \"test-pod-1\" (UID: \"0403a786-a91b-4ddc-b6c6-e7852120c02d\") " pod="default/test-pod-1"
Feb 13 19:03:48.991113 kernel: FS-Cache: Loaded
Feb 13 19:03:49.021435 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:03:49.021604 kernel: RPC: Registered udp transport module.
Feb 13 19:03:49.021626 kernel: RPC: Registered tcp transport module.
Feb 13 19:03:49.021657 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:03:49.021677 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:03:49.189325 kernel: NFS: Registering the id_resolver key type
Feb 13 19:03:49.189492 kernel: Key type id_resolver registered
Feb 13 19:03:49.189514 kernel: Key type id_legacy registered
Feb 13 19:03:49.222368 kubelet[1775]: E0213 19:03:49.222306 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:49.238237 nfsidmap[3202]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:03:49.241736 nfsidmap[3205]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:03:49.343470 containerd[1463]: time="2025-02-13T19:03:49.343433283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0403a786-a91b-4ddc-b6c6-e7852120c02d,Namespace:default,Attempt:0,}"
Feb 13 19:03:49.385892 systemd-networkd[1398]: lxcb09d0495136d: Link UP
Feb 13 19:03:49.386381 kernel: eth0: renamed from tmp32505
Feb 13 19:03:49.391103 systemd-networkd[1398]: lxcb09d0495136d: Gained carrier
Feb 13 19:03:49.583576 containerd[1463]: time="2025-02-13T19:03:49.582816891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:49.583576 containerd[1463]: time="2025-02-13T19:03:49.583549168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:49.583576 containerd[1463]: time="2025-02-13T19:03:49.583561889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:49.584927 containerd[1463]: time="2025-02-13T19:03:49.583654854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:49.604264 systemd[1]: Started cri-containerd-32505355033c4896231067f15ef7705a3b0e2ad930d6890437146d0abe8dd456.scope - libcontainer container 32505355033c4896231067f15ef7705a3b0e2ad930d6890437146d0abe8dd456.
Feb 13 19:03:49.617599 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:03:49.635557 containerd[1463]: time="2025-02-13T19:03:49.635495152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0403a786-a91b-4ddc-b6c6-e7852120c02d,Namespace:default,Attempt:0,} returns sandbox id \"32505355033c4896231067f15ef7705a3b0e2ad930d6890437146d0abe8dd456\""
Feb 13 19:03:49.637572 containerd[1463]: time="2025-02-13T19:03:49.637341605Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:03:49.950384 containerd[1463]: time="2025-02-13T19:03:49.950267087Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:03:49.952396 containerd[1463]: time="2025-02-13T19:03:49.952334312Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:03:49.954515 containerd[1463]: time="2025-02-13T19:03:49.954465659Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 317.092533ms"
Feb 13 19:03:49.954515 containerd[1463]: time="2025-02-13T19:03:49.954499061Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 19:03:49.956372 containerd[1463]: time="2025-02-13T19:03:49.956240549Z" level=info msg="CreateContainer within sandbox \"32505355033c4896231067f15ef7705a3b0e2ad930d6890437146d0abe8dd456\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:03:49.967421 containerd[1463]: time="2025-02-13T19:03:49.967357550Z" level=info msg="CreateContainer within sandbox \"32505355033c4896231067f15ef7705a3b0e2ad930d6890437146d0abe8dd456\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"394d981e16b7cce5af1a54ad831096e9f6ee42feed4c5525a38ea917fecbe72c\""
Feb 13 19:03:49.968721 containerd[1463]: time="2025-02-13T19:03:49.968693258Z" level=info msg="StartContainer for \"394d981e16b7cce5af1a54ad831096e9f6ee42feed4c5525a38ea917fecbe72c\""
Feb 13 19:03:50.000288 systemd[1]: Started cri-containerd-394d981e16b7cce5af1a54ad831096e9f6ee42feed4c5525a38ea917fecbe72c.scope - libcontainer container 394d981e16b7cce5af1a54ad831096e9f6ee42feed4c5525a38ea917fecbe72c.
Feb 13 19:03:50.035367 containerd[1463]: time="2025-02-13T19:03:50.035290070Z" level=info msg="StartContainer for \"394d981e16b7cce5af1a54ad831096e9f6ee42feed4c5525a38ea917fecbe72c\" returns successfully"
Feb 13 19:03:50.222573 kubelet[1775]: E0213 19:03:50.222440 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:50.495035 kubelet[1775]: I0213 19:03:50.494761 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.176552207 podStartE2EDuration="16.494743876s" podCreationTimestamp="2025-02-13 19:03:34 +0000 UTC" firstStartedPulling="2025-02-13 19:03:49.636894582 +0000 UTC m=+45.440737974" lastFinishedPulling="2025-02-13 19:03:49.955086291 +0000 UTC m=+45.758929643" observedRunningTime="2025-02-13 19:03:50.494220851 +0000 UTC m=+46.298064243" watchObservedRunningTime="2025-02-13 19:03:50.494743876 +0000 UTC m=+46.298587268"
Feb 13 19:03:51.205441 systemd-networkd[1398]: lxcb09d0495136d: Gained IPv6LL
Feb 13 19:03:51.223395 kubelet[1775]: E0213 19:03:51.223342 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:52.224532 kubelet[1775]: E0213 19:03:52.224487 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:53.225078 kubelet[1775]: E0213 19:03:53.225039 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:54.225377 kubelet[1775]: E0213 19:03:54.225325 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:55.226362 kubelet[1775]: E0213 19:03:55.226308 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:56.227432 kubelet[1775]: E0213 19:03:56.227383 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:57.227694 kubelet[1775]: E0213 19:03:57.227661 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:58.229177 kubelet[1775]: E0213 19:03:58.229127 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:03:58.275138 containerd[1463]: time="2025-02-13T19:03:58.273332936Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:03:58.288738 containerd[1463]: time="2025-02-13T19:03:58.288684606Z" level=info msg="StopContainer for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" with timeout 2 (s)"
Feb 13 19:03:58.289001 containerd[1463]: time="2025-02-13T19:03:58.288965056Z" level=info msg="Stop container \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" with signal terminated"
Feb 13 19:03:58.294907 systemd-networkd[1398]: lxc_health: Link DOWN
Feb 13 19:03:58.294979 systemd-networkd[1398]: lxc_health: Lost carrier
Feb 13 19:03:58.306085 systemd[1]: cri-containerd-d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa.scope: Deactivated successfully.
Feb 13 19:03:58.306400 systemd[1]: cri-containerd-d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa.scope: Consumed 6.751s CPU time, 121.2M memory peak, 152K read from disk, 12.9M written to disk.
Feb 13 19:03:58.323594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa-rootfs.mount: Deactivated successfully.
Feb 13 19:03:58.333537 containerd[1463]: time="2025-02-13T19:03:58.333379528Z" level=info msg="shim disconnected" id=d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa namespace=k8s.io
Feb 13 19:03:58.333537 containerd[1463]: time="2025-02-13T19:03:58.333436770Z" level=warning msg="cleaning up after shim disconnected" id=d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa namespace=k8s.io
Feb 13 19:03:58.333537 containerd[1463]: time="2025-02-13T19:03:58.333445090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:58.346766 containerd[1463]: time="2025-02-13T19:03:58.346709086Z" level=info msg="StopContainer for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" returns successfully"
Feb 13 19:03:58.347394 containerd[1463]: time="2025-02-13T19:03:58.347363709Z" level=info msg="StopPodSandbox for \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\""
Feb 13 19:03:58.347456 containerd[1463]: time="2025-02-13T19:03:58.347414311Z" level=info msg="Container to stop \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.347456 containerd[1463]: time="2025-02-13T19:03:58.347426471Z" level=info msg="Container to stop \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.347456 containerd[1463]: time="2025-02-13T19:03:58.347435072Z" level=info msg="Container to stop \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.347456 containerd[1463]: time="2025-02-13T19:03:58.347444912Z" level=info msg="Container to stop \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.347456 containerd[1463]: time="2025-02-13T19:03:58.347453232Z" level=info msg="Container to stop \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:58.349222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad-shm.mount: Deactivated successfully.
Feb 13 19:03:58.353438 systemd[1]: cri-containerd-7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad.scope: Deactivated successfully.
Feb 13 19:03:58.370555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad-rootfs.mount: Deactivated successfully.
Feb 13 19:03:58.375411 containerd[1463]: time="2025-02-13T19:03:58.375341712Z" level=info msg="shim disconnected" id=7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad namespace=k8s.io
Feb 13 19:03:58.375411 containerd[1463]: time="2025-02-13T19:03:58.375408314Z" level=warning msg="cleaning up after shim disconnected" id=7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad namespace=k8s.io
Feb 13 19:03:58.375411 containerd[1463]: time="2025-02-13T19:03:58.375416754Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:58.392293 containerd[1463]: time="2025-02-13T19:03:58.392121513Z" level=info msg="TearDown network for sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" successfully"
Feb 13 19:03:58.392293 containerd[1463]: time="2025-02-13T19:03:58.392159234Z" level=info msg="StopPodSandbox for \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" returns successfully"
Feb 13 19:03:58.514019 kubelet[1775]: I0213 19:03:58.513531 1775 scope.go:117] "RemoveContainer" containerID="d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa"
Feb 13 19:03:58.515522 containerd[1463]: time="2025-02-13T19:03:58.515486654Z" level=info msg="RemoveContainer for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\""
Feb 13 19:03:58.528384 kubelet[1775]: I0213 19:03:58.528353 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-kernel\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529199 kubelet[1775]: I0213 19:03:58.528397 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c9a004d-6a92-4214-9c2a-ea634fe6f451-clustermesh-secrets\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529199 kubelet[1775]: I0213 19:03:58.528418 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hubble-tls\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529199 kubelet[1775]: I0213 19:03:58.528438 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-config-path\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529199 kubelet[1775]: I0213 19:03:58.528455 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r22qz\" (UniqueName: \"kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-kube-api-access-r22qz\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529199 kubelet[1775]: I0213 19:03:58.528471 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-xtables-lock\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529199 kubelet[1775]: I0213 19:03:58.528486 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-etc-cni-netd\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529341 kubelet[1775]: I0213 19:03:58.528479 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529341 kubelet[1775]: I0213 19:03:58.528499 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-net\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529341 kubelet[1775]: I0213 19:03:58.528521 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529341 kubelet[1775]: I0213 19:03:58.528578 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-lib-modules\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529341 kubelet[1775]: I0213 19:03:58.528603 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hostproc\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529445 kubelet[1775]: I0213 19:03:58.528619 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-cgroup\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529445 kubelet[1775]: I0213 19:03:58.528634 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-bpf-maps\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529445 kubelet[1775]: I0213 19:03:58.528670 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-run\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529445 kubelet[1775]: I0213 19:03:58.528685 1775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cni-path\") pod \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\" (UID: \"6c9a004d-6a92-4214-9c2a-ea634fe6f451\") "
Feb 13 19:03:58.529445 kubelet[1775]: I0213 19:03:58.528720 1775 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-kernel\") on node \"10.0.0.44\" DevicePath \"\""
Feb 13 19:03:58.529445 kubelet[1775]: I0213 19:03:58.528731 1775 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-host-proc-sys-net\") on node \"10.0.0.44\" DevicePath \"\""
Feb 13 19:03:58.529574 kubelet[1775]: I0213 19:03:58.528753 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529574 kubelet[1775]: I0213 19:03:58.528769 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529574 kubelet[1775]: I0213 19:03:58.528785 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529574 kubelet[1775]: I0213 19:03:58.528798 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529574 kubelet[1775]: I0213 19:03:58.528815 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.529679 kubelet[1775]: I0213 19:03:58.528813 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.530041 kubelet[1775]: I0213 19:03:58.530000 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.530119 kubelet[1775]: I0213 19:03:58.530053 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:58.530768 kubelet[1775]: I0213 19:03:58.530735 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:03:58.532849 kubelet[1775]: I0213 19:03:58.532794 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:58.533562 kubelet[1775]: I0213 19:03:58.533447 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9a004d-6a92-4214-9c2a-ea634fe6f451-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:03:58.533657 kubelet[1775]: I0213 19:03:58.533585 1775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-kube-api-access-r22qz" (OuterVolumeSpecName: "kube-api-access-r22qz") pod "6c9a004d-6a92-4214-9c2a-ea634fe6f451" (UID: "6c9a004d-6a92-4214-9c2a-ea634fe6f451"). InnerVolumeSpecName "kube-api-access-r22qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:03:58.534474 systemd[1]: var-lib-kubelet-pods-6c9a004d\x2d6a92\x2d4214\x2d9c2a\x2dea634fe6f451-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:03:58.534591 systemd[1]: var-lib-kubelet-pods-6c9a004d\x2d6a92\x2d4214\x2d9c2a\x2dea634fe6f451-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:03:58.601374 containerd[1463]: time="2025-02-13T19:03:58.601334450Z" level=info msg="RemoveContainer for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" returns successfully" Feb 13 19:03:58.601683 kubelet[1775]: I0213 19:03:58.601656 1775 scope.go:117] "RemoveContainer" containerID="a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718" Feb 13 19:03:58.602783 containerd[1463]: time="2025-02-13T19:03:58.602747821Z" level=info msg="RemoveContainer for \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\"" Feb 13 19:03:58.605930 containerd[1463]: time="2025-02-13T19:03:58.605897534Z" level=info msg="RemoveContainer for \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\" returns successfully" Feb 13 19:03:58.606292 kubelet[1775]: I0213 19:03:58.606167 1775 scope.go:117] "RemoveContainer" containerID="ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8" Feb 13 19:03:58.607547 containerd[1463]: time="2025-02-13T19:03:58.607493711Z" level=info msg="RemoveContainer for 
\"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\"" Feb 13 19:03:58.609998 containerd[1463]: time="2025-02-13T19:03:58.609964639Z" level=info msg="RemoveContainer for \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\" returns successfully" Feb 13 19:03:58.610262 kubelet[1775]: I0213 19:03:58.610238 1775 scope.go:117] "RemoveContainer" containerID="8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1" Feb 13 19:03:58.611405 containerd[1463]: time="2025-02-13T19:03:58.611376810Z" level=info msg="RemoveContainer for \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\"" Feb 13 19:03:58.613902 containerd[1463]: time="2025-02-13T19:03:58.613870699Z" level=info msg="RemoveContainer for \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\" returns successfully" Feb 13 19:03:58.614193 kubelet[1775]: I0213 19:03:58.614170 1775 scope.go:117] "RemoveContainer" containerID="f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7" Feb 13 19:03:58.615286 containerd[1463]: time="2025-02-13T19:03:58.615262389Z" level=info msg="RemoveContainer for \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\"" Feb 13 19:03:58.617901 containerd[1463]: time="2025-02-13T19:03:58.617845722Z" level=info msg="RemoveContainer for \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\" returns successfully" Feb 13 19:03:58.618167 kubelet[1775]: I0213 19:03:58.618054 1775 scope.go:117] "RemoveContainer" containerID="d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa" Feb 13 19:03:58.618480 containerd[1463]: time="2025-02-13T19:03:58.618373821Z" level=error msg="ContainerStatus for \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\": not found" Feb 13 19:03:58.618557 kubelet[1775]: E0213 
19:03:58.618535 1775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\": not found" containerID="d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa" Feb 13 19:03:58.618650 kubelet[1775]: I0213 19:03:58.618563 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa"} err="failed to get container status \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"d32afece20e9ea173426aa3ed6cbe65c778d4d026a38ac457a63bbf2c62f76aa\": not found" Feb 13 19:03:58.618650 kubelet[1775]: I0213 19:03:58.618650 1775 scope.go:117] "RemoveContainer" containerID="a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718" Feb 13 19:03:58.618992 containerd[1463]: time="2025-02-13T19:03:58.618924760Z" level=error msg="ContainerStatus for \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\": not found" Feb 13 19:03:58.619097 kubelet[1775]: E0213 19:03:58.619054 1775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\": not found" containerID="a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718" Feb 13 19:03:58.619130 kubelet[1775]: I0213 19:03:58.619099 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718"} err="failed to get 
container status \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\": rpc error: code = NotFound desc = an error occurred when try to find container \"a21c57edbda4385c2cbb72b1ba8de50f6d290e4bd5b425da5afa6275e31b5718\": not found" Feb 13 19:03:58.619130 kubelet[1775]: I0213 19:03:58.619118 1775 scope.go:117] "RemoveContainer" containerID="ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8" Feb 13 19:03:58.619316 containerd[1463]: time="2025-02-13T19:03:58.619287173Z" level=error msg="ContainerStatus for \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\": not found" Feb 13 19:03:58.619601 kubelet[1775]: E0213 19:03:58.619466 1775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\": not found" containerID="ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8" Feb 13 19:03:58.619601 kubelet[1775]: I0213 19:03:58.619494 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8"} err="failed to get container status \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad90591289fe765138336f731ac983d53b7101c331e189ce1277efaf1356a2a8\": not found" Feb 13 19:03:58.619601 kubelet[1775]: I0213 19:03:58.619519 1775 scope.go:117] "RemoveContainer" containerID="8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1" Feb 13 19:03:58.619932 containerd[1463]: time="2025-02-13T19:03:58.619846793Z" level=error msg="ContainerStatus for 
\"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\": not found" Feb 13 19:03:58.619990 kubelet[1775]: E0213 19:03:58.619968 1775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\": not found" containerID="8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1" Feb 13 19:03:58.620080 kubelet[1775]: I0213 19:03:58.619986 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1"} err="failed to get container status \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dc5bc0334338ea480e9b94b463c5ed887027079ee104bcb8f2a05360a8a73a1\": not found" Feb 13 19:03:58.620080 kubelet[1775]: I0213 19:03:58.620029 1775 scope.go:117] "RemoveContainer" containerID="f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7" Feb 13 19:03:58.620244 containerd[1463]: time="2025-02-13T19:03:58.620210966Z" level=error msg="ContainerStatus for \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\": not found" Feb 13 19:03:58.620349 kubelet[1775]: E0213 19:03:58.620320 1775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\": not found" 
containerID="f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7" Feb 13 19:03:58.620384 kubelet[1775]: I0213 19:03:58.620346 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7"} err="failed to get container status \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f61bc1fde332a75b23d9309f5aa616026ddc344c2bd132c1ebabadcb353c6bb7\": not found" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629584 1775 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-run\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629616 1775 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cni-path\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629626 1775 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c9a004d-6a92-4214-9c2a-ea634fe6f451-clustermesh-secrets\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629636 1775 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hubble-tls\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629645 1775 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-config-path\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629654 1775 
reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r22qz\" (UniqueName: \"kubernetes.io/projected/6c9a004d-6a92-4214-9c2a-ea634fe6f451-kube-api-access-r22qz\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629662 1775 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-xtables-lock\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629720 kubelet[1775]: I0213 19:03:58.629669 1775 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-etc-cni-netd\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629970 kubelet[1775]: I0213 19:03:58.629676 1775 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-lib-modules\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629970 kubelet[1775]: I0213 19:03:58.629685 1775 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-hostproc\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629970 kubelet[1775]: I0213 19:03:58.629692 1775 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-cilium-cgroup\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.629970 kubelet[1775]: I0213 19:03:58.629701 1775 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c9a004d-6a92-4214-9c2a-ea634fe6f451-bpf-maps\") on node \"10.0.0.44\" DevicePath \"\"" Feb 13 19:03:58.818485 systemd[1]: Removed slice kubepods-burstable-pod6c9a004d_6a92_4214_9c2a_ea634fe6f451.slice - libcontainer container 
kubepods-burstable-pod6c9a004d_6a92_4214_9c2a_ea634fe6f451.slice. Feb 13 19:03:58.819270 systemd[1]: kubepods-burstable-pod6c9a004d_6a92_4214_9c2a_ea634fe6f451.slice: Consumed 6.985s CPU time, 121.7M memory peak, 152K read from disk, 12.9M written to disk. Feb 13 19:03:59.230034 kubelet[1775]: E0213 19:03:59.229978 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:03:59.260996 systemd[1]: var-lib-kubelet-pods-6c9a004d\x2d6a92\x2d4214\x2d9c2a\x2dea634fe6f451-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr22qz.mount: Deactivated successfully. Feb 13 19:03:59.366889 kubelet[1775]: I0213 19:03:59.366842 1775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" path="/var/lib/kubelet/pods/6c9a004d-6a92-4214-9c2a-ea634fe6f451/volumes" Feb 13 19:04:00.230487 kubelet[1775]: E0213 19:04:00.230438 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:00.445715 kubelet[1775]: E0213 19:04:00.445660 1775 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:04:01.230851 kubelet[1775]: E0213 19:04:01.230796 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:04:01.559465 kubelet[1775]: I0213 19:04:01.559333 1775 topology_manager.go:215] "Topology Admit Handler" podUID="81bb20ad-15f6-415a-b1dc-c92dc042d9f7" podNamespace="kube-system" podName="cilium-kd2wl" Feb 13 19:04:01.559465 kubelet[1775]: E0213 19:04:01.559386 1775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" containerName="mount-cgroup" Feb 13 19:04:01.559465 kubelet[1775]: E0213 19:04:01.559396 1775 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" containerName="apply-sysctl-overwrites" Feb 13 19:04:01.559465 kubelet[1775]: E0213 19:04:01.559402 1775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" containerName="mount-bpf-fs" Feb 13 19:04:01.559465 kubelet[1775]: E0213 19:04:01.559408 1775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" containerName="clean-cilium-state" Feb 13 19:04:01.559465 kubelet[1775]: E0213 19:04:01.559416 1775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" containerName="cilium-agent" Feb 13 19:04:01.559465 kubelet[1775]: I0213 19:04:01.559434 1775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9a004d-6a92-4214-9c2a-ea634fe6f451" containerName="cilium-agent" Feb 13 19:04:01.559843 kubelet[1775]: I0213 19:04:01.559806 1775 topology_manager.go:215] "Topology Admit Handler" podUID="60ef4ce8-d910-44a3-8120-85f024a4983f" podNamespace="kube-system" podName="cilium-operator-599987898-xdjt4" Feb 13 19:04:01.570040 systemd[1]: Created slice kubepods-burstable-pod81bb20ad_15f6_415a_b1dc_c92dc042d9f7.slice - libcontainer container kubepods-burstable-pod81bb20ad_15f6_415a_b1dc_c92dc042d9f7.slice. Feb 13 19:04:01.583908 systemd[1]: Created slice kubepods-besteffort-pod60ef4ce8_d910_44a3_8120_85f024a4983f.slice - libcontainer container kubepods-besteffort-pod60ef4ce8_d910_44a3_8120_85f024a4983f.slice. 
Feb 13 19:04:01.648952 kubelet[1775]: I0213 19:04:01.648902 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-clustermesh-secrets\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.648952 kubelet[1775]: I0213 19:04:01.648949 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-cilium-config-path\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649142 kubelet[1775]: I0213 19:04:01.648970 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-hubble-tls\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649142 kubelet[1775]: I0213 19:04:01.648986 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-etc-cni-netd\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649142 kubelet[1775]: I0213 19:04:01.649030 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-host-proc-sys-net\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649142 kubelet[1775]: I0213 19:04:01.649078 1775 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-host-proc-sys-kernel\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649142 kubelet[1775]: I0213 19:04:01.649103 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-cilium-ipsec-secrets\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649142 kubelet[1775]: I0213 19:04:01.649128 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-hostproc\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649260 kubelet[1775]: I0213 19:04:01.649148 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-cilium-cgroup\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649260 kubelet[1775]: I0213 19:04:01.649166 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-cni-path\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649260 kubelet[1775]: I0213 19:04:01.649189 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2kk\" (UniqueName: 
\"kubernetes.io/projected/60ef4ce8-d910-44a3-8120-85f024a4983f-kube-api-access-fk2kk\") pod \"cilium-operator-599987898-xdjt4\" (UID: \"60ef4ce8-d910-44a3-8120-85f024a4983f\") " pod="kube-system/cilium-operator-599987898-xdjt4" Feb 13 19:04:01.649260 kubelet[1775]: I0213 19:04:01.649206 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-xtables-lock\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649260 kubelet[1775]: I0213 19:04:01.649221 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-cilium-run\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649260 kubelet[1775]: I0213 19:04:01.649238 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-bpf-maps\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649371 kubelet[1775]: I0213 19:04:01.649252 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-lib-modules\") pod \"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649371 kubelet[1775]: I0213 19:04:01.649266 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67j66\" (UniqueName: \"kubernetes.io/projected/81bb20ad-15f6-415a-b1dc-c92dc042d9f7-kube-api-access-67j66\") pod 
\"cilium-kd2wl\" (UID: \"81bb20ad-15f6-415a-b1dc-c92dc042d9f7\") " pod="kube-system/cilium-kd2wl" Feb 13 19:04:01.649371 kubelet[1775]: I0213 19:04:01.649282 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60ef4ce8-d910-44a3-8120-85f024a4983f-cilium-config-path\") pod \"cilium-operator-599987898-xdjt4\" (UID: \"60ef4ce8-d910-44a3-8120-85f024a4983f\") " pod="kube-system/cilium-operator-599987898-xdjt4" Feb 13 19:04:01.881560 kubelet[1775]: E0213 19:04:01.881502 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:04:01.882159 containerd[1463]: time="2025-02-13T19:04:01.882113750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kd2wl,Uid:81bb20ad-15f6-415a-b1dc-c92dc042d9f7,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:01.886077 kubelet[1775]: E0213 19:04:01.885786 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:04:01.886319 containerd[1463]: time="2025-02-13T19:04:01.886284926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xdjt4,Uid:60ef4ce8-d910-44a3-8120-85f024a4983f,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:01.902489 containerd[1463]: time="2025-02-13T19:04:01.902371489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:01.902489 containerd[1463]: time="2025-02-13T19:04:01.902447932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:01.902489 containerd[1463]: time="2025-02-13T19:04:01.902464532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:01.902659 containerd[1463]: time="2025-02-13T19:04:01.902548935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:01.909176 containerd[1463]: time="2025-02-13T19:04:01.909047907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:01.909176 containerd[1463]: time="2025-02-13T19:04:01.909143870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:01.909176 containerd[1463]: time="2025-02-13T19:04:01.909155950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:01.909401 containerd[1463]: time="2025-02-13T19:04:01.909239153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:01.925527 systemd[1]: Started cri-containerd-906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6.scope - libcontainer container 906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6. Feb 13 19:04:01.930035 systemd[1]: Started cri-containerd-539837863522e8dfe1b0dba7211d47a6a71a33b420a4e3c31945aabbe0944b10.scope - libcontainer container 539837863522e8dfe1b0dba7211d47a6a71a33b420a4e3c31945aabbe0944b10. 
Feb 13 19:04:01.947523 containerd[1463]: time="2025-02-13T19:04:01.947476238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kd2wl,Uid:81bb20ad-15f6-415a-b1dc-c92dc042d9f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\""
Feb 13 19:04:01.948151 kubelet[1775]: E0213 19:04:01.948126 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:01.950854 containerd[1463]: time="2025-02-13T19:04:01.950806386Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:04:01.962728 containerd[1463]: time="2025-02-13T19:04:01.962682853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xdjt4,Uid:60ef4ce8-d910-44a3-8120-85f024a4983f,Namespace:kube-system,Attempt:0,} returns sandbox id \"539837863522e8dfe1b0dba7211d47a6a71a33b420a4e3c31945aabbe0944b10\""
Feb 13 19:04:01.963330 kubelet[1775]: E0213 19:04:01.963272 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:01.964180 containerd[1463]: time="2025-02-13T19:04:01.964155341Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 19:04:01.987787 containerd[1463]: time="2025-02-13T19:04:01.987695667Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c\""
Feb 13 19:04:01.988456 containerd[1463]: time="2025-02-13T19:04:01.988256686Z" level=info msg="StartContainer for \"c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c\""
Feb 13 19:04:02.013242 systemd[1]: Started cri-containerd-c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c.scope - libcontainer container c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c.
Feb 13 19:04:02.032098 containerd[1463]: time="2025-02-13T19:04:02.032024841Z" level=info msg="StartContainer for \"c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c\" returns successfully"
Feb 13 19:04:02.107942 systemd[1]: cri-containerd-c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c.scope: Deactivated successfully.
Feb 13 19:04:02.136995 containerd[1463]: time="2025-02-13T19:04:02.136859833Z" level=info msg="shim disconnected" id=c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c namespace=k8s.io
Feb 13 19:04:02.136995 containerd[1463]: time="2025-02-13T19:04:02.136916395Z" level=warning msg="cleaning up after shim disconnected" id=c568eaa44dc27b6b26892e7029fcae76c3a09d542e0f37dbb8c7263433ef1f1c namespace=k8s.io
Feb 13 19:04:02.136995 containerd[1463]: time="2025-02-13T19:04:02.136925235Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:02.231278 kubelet[1775]: E0213 19:04:02.231232 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:02.526562 kubelet[1775]: E0213 19:04:02.526458 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:02.528462 containerd[1463]: time="2025-02-13T19:04:02.528426326Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:04:02.538221 containerd[1463]: time="2025-02-13T19:04:02.538168794Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c\""
Feb 13 19:04:02.538699 containerd[1463]: time="2025-02-13T19:04:02.538665409Z" level=info msg="StartContainer for \"bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c\""
Feb 13 19:04:02.564304 systemd[1]: Started cri-containerd-bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c.scope - libcontainer container bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c.
Feb 13 19:04:02.582756 containerd[1463]: time="2025-02-13T19:04:02.582715961Z" level=info msg="StartContainer for \"bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c\" returns successfully"
Feb 13 19:04:02.592514 systemd[1]: cri-containerd-bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c.scope: Deactivated successfully.
Feb 13 19:04:02.611266 containerd[1463]: time="2025-02-13T19:04:02.611188701Z" level=info msg="shim disconnected" id=bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c namespace=k8s.io
Feb 13 19:04:02.611266 containerd[1463]: time="2025-02-13T19:04:02.611256583Z" level=warning msg="cleaning up after shim disconnected" id=bb976efd55bf96cb4b517f73e360b2739857ed82db765214d7b75feb8c0a958c namespace=k8s.io
Feb 13 19:04:02.611266 containerd[1463]: time="2025-02-13T19:04:02.611265023Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:03.231701 kubelet[1775]: E0213 19:04:03.231656 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:03.506961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735655553.mount: Deactivated successfully.
Feb 13 19:04:03.529866 kubelet[1775]: E0213 19:04:03.529837 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:03.531905 containerd[1463]: time="2025-02-13T19:04:03.531868368Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:04:03.546762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572098178.mount: Deactivated successfully.
Feb 13 19:04:03.548837 containerd[1463]: time="2025-02-13T19:04:03.548797648Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331\""
Feb 13 19:04:03.549823 containerd[1463]: time="2025-02-13T19:04:03.549784318Z" level=info msg="StartContainer for \"00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331\""
Feb 13 19:04:03.591234 systemd[1]: Started cri-containerd-00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331.scope - libcontainer container 00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331.
Feb 13 19:04:03.617632 systemd[1]: cri-containerd-00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331.scope: Deactivated successfully.
Feb 13 19:04:03.618990 containerd[1463]: time="2025-02-13T19:04:03.618920401Z" level=info msg="StartContainer for \"00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331\" returns successfully"
Feb 13 19:04:03.673891 containerd[1463]: time="2025-02-13T19:04:03.673673562Z" level=info msg="shim disconnected" id=00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331 namespace=k8s.io
Feb 13 19:04:03.673891 containerd[1463]: time="2025-02-13T19:04:03.673736884Z" level=warning msg="cleaning up after shim disconnected" id=00c7c27a005c0d75cfae6110ea33e4079aa7836d482d2915f65724ab5e04d331 namespace=k8s.io
Feb 13 19:04:03.673891 containerd[1463]: time="2025-02-13T19:04:03.673746284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:03.879934 containerd[1463]: time="2025-02-13T19:04:03.879882772Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:03.881444 containerd[1463]: time="2025-02-13T19:04:03.881360417Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 19:04:03.882269 containerd[1463]: time="2025-02-13T19:04:03.882231324Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:03.883578 containerd[1463]: time="2025-02-13T19:04:03.883540724Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.919352342s"
Feb 13 19:04:03.883578 containerd[1463]: time="2025-02-13T19:04:03.883575845Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 19:04:03.885807 containerd[1463]: time="2025-02-13T19:04:03.885779233Z" level=info msg="CreateContainer within sandbox \"539837863522e8dfe1b0dba7211d47a6a71a33b420a4e3c31945aabbe0944b10\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:04:03.899270 containerd[1463]: time="2025-02-13T19:04:03.899230606Z" level=info msg="CreateContainer within sandbox \"539837863522e8dfe1b0dba7211d47a6a71a33b420a4e3c31945aabbe0944b10\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e47007eb3548e796a5e5be13d355180ff84a0409b30b6dadd8df77ea56a1f3be\""
Feb 13 19:04:03.900106 containerd[1463]: time="2025-02-13T19:04:03.899973549Z" level=info msg="StartContainer for \"e47007eb3548e796a5e5be13d355180ff84a0409b30b6dadd8df77ea56a1f3be\""
Feb 13 19:04:03.929253 systemd[1]: Started cri-containerd-e47007eb3548e796a5e5be13d355180ff84a0409b30b6dadd8df77ea56a1f3be.scope - libcontainer container e47007eb3548e796a5e5be13d355180ff84a0409b30b6dadd8df77ea56a1f3be.
Feb 13 19:04:03.952169 containerd[1463]: time="2025-02-13T19:04:03.952125790Z" level=info msg="StartContainer for \"e47007eb3548e796a5e5be13d355180ff84a0409b30b6dadd8df77ea56a1f3be\" returns successfully"
Feb 13 19:04:04.232686 kubelet[1775]: E0213 19:04:04.232554 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:04.532265 kubelet[1775]: E0213 19:04:04.532142 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:04.535164 kubelet[1775]: E0213 19:04:04.535073 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:04.537081 containerd[1463]: time="2025-02-13T19:04:04.537037934Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:04:04.554751 containerd[1463]: time="2025-02-13T19:04:04.554698422Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153\""
Feb 13 19:04:04.556165 containerd[1463]: time="2025-02-13T19:04:04.555257798Z" level=info msg="StartContainer for \"229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153\""
Feb 13 19:04:04.559704 kubelet[1775]: I0213 19:04:04.559644 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xdjt4" podStartSLOduration=1.6391977930000001 podStartE2EDuration="3.559629129s" podCreationTimestamp="2025-02-13 19:04:01 +0000 UTC" firstStartedPulling="2025-02-13 19:04:01.963896732 +0000 UTC m=+57.767740124" lastFinishedPulling="2025-02-13 19:04:03.884328108 +0000 UTC m=+59.688171460" observedRunningTime="2025-02-13 19:04:04.540637202 +0000 UTC m=+60.344480594" watchObservedRunningTime="2025-02-13 19:04:04.559629129 +0000 UTC m=+60.363472481"
Feb 13 19:04:04.584255 systemd[1]: Started cri-containerd-229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153.scope - libcontainer container 229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153.
Feb 13 19:04:04.603459 systemd[1]: cri-containerd-229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153.scope: Deactivated successfully.
Feb 13 19:04:04.613736 containerd[1463]: time="2025-02-13T19:04:04.613651622Z" level=info msg="StartContainer for \"229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153\" returns successfully"
Feb 13 19:04:04.688816 containerd[1463]: time="2025-02-13T19:04:04.688746264Z" level=info msg="shim disconnected" id=229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153 namespace=k8s.io
Feb 13 19:04:04.688816 containerd[1463]: time="2025-02-13T19:04:04.688810585Z" level=warning msg="cleaning up after shim disconnected" id=229df9f1b39398a98cd6c86b751373427494955a63f0268b859b3be82e258153 namespace=k8s.io
Feb 13 19:04:04.688816 containerd[1463]: time="2025-02-13T19:04:04.688820546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:04.753797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320674158.mount: Deactivated successfully.
Feb 13 19:04:05.193562 kubelet[1775]: E0213 19:04:05.193516 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:05.220555 containerd[1463]: time="2025-02-13T19:04:05.220501008Z" level=info msg="StopPodSandbox for \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\""
Feb 13 19:04:05.220692 containerd[1463]: time="2025-02-13T19:04:05.220592731Z" level=info msg="TearDown network for sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" successfully"
Feb 13 19:04:05.220692 containerd[1463]: time="2025-02-13T19:04:05.220605291Z" level=info msg="StopPodSandbox for \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" returns successfully"
Feb 13 19:04:05.221434 containerd[1463]: time="2025-02-13T19:04:05.221386114Z" level=info msg="RemovePodSandbox for \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\""
Feb 13 19:04:05.221434 containerd[1463]: time="2025-02-13T19:04:05.221423275Z" level=info msg="Forcibly stopping sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\""
Feb 13 19:04:05.221582 containerd[1463]: time="2025-02-13T19:04:05.221479437Z" level=info msg="TearDown network for sandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" successfully"
Feb 13 19:04:05.227863 containerd[1463]: time="2025-02-13T19:04:05.227797420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:04:05.227863 containerd[1463]: time="2025-02-13T19:04:05.227847582Z" level=info msg="RemovePodSandbox \"7a957892eabb17bfa478253f451551f54c3b004283a0103fce40bce6fffd3aad\" returns successfully"
Feb 13 19:04:05.232843 kubelet[1775]: E0213 19:04:05.232749 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:05.447317 kubelet[1775]: E0213 19:04:05.447213 1775 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:04:05.539801 kubelet[1775]: E0213 19:04:05.539467 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:05.557191 kubelet[1775]: E0213 19:04:05.557166 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:05.559397 containerd[1463]: time="2025-02-13T19:04:05.559272655Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:04:05.577054 containerd[1463]: time="2025-02-13T19:04:05.577007890Z" level=info msg="CreateContainer within sandbox \"906b7701729de9ba3e06a5ef4b3132fe6ce4746fb7d8c1b7e719da70837bd7d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0038fe7e6f4f0da175e9b5605b6a262436c466e96cd74eb123f4a0ad1e9b111\""
Feb 13 19:04:05.577528 containerd[1463]: time="2025-02-13T19:04:05.577490704Z" level=info msg="StartContainer for \"a0038fe7e6f4f0da175e9b5605b6a262436c466e96cd74eb123f4a0ad1e9b111\""
Feb 13 19:04:05.619256 systemd[1]: Started cri-containerd-a0038fe7e6f4f0da175e9b5605b6a262436c466e96cd74eb123f4a0ad1e9b111.scope - libcontainer container a0038fe7e6f4f0da175e9b5605b6a262436c466e96cd74eb123f4a0ad1e9b111.
Feb 13 19:04:05.659837 containerd[1463]: time="2025-02-13T19:04:05.659797376Z" level=info msg="StartContainer for \"a0038fe7e6f4f0da175e9b5605b6a262436c466e96cd74eb123f4a0ad1e9b111\" returns successfully"
Feb 13 19:04:05.970169 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:04:06.233477 kubelet[1775]: E0213 19:04:06.233343 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:06.544990 kubelet[1775]: E0213 19:04:06.544887 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:06.559076 kubelet[1775]: I0213 19:04:06.559010 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kd2wl" podStartSLOduration=5.558995139 podStartE2EDuration="5.558995139s" podCreationTimestamp="2025-02-13 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:06.557328331 +0000 UTC m=+62.361171723" watchObservedRunningTime="2025-02-13 19:04:06.558995139 +0000 UTC m=+62.362838531"
Feb 13 19:04:07.073135 kubelet[1775]: I0213 19:04:07.073086 1775 setters.go:580] "Node became not ready" node="10.0.0.44" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:04:07Z","lastTransitionTime":"2025-02-13T19:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:04:07.234258 kubelet[1775]: E0213 19:04:07.234222 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:07.883274 kubelet[1775]: E0213 19:04:07.883190 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:08.235147 kubelet[1775]: E0213 19:04:08.235041 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:08.849145 systemd-networkd[1398]: lxc_health: Link UP
Feb 13 19:04:08.858680 systemd-networkd[1398]: lxc_health: Gained carrier
Feb 13 19:04:09.236115 kubelet[1775]: E0213 19:04:09.235964 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:09.883503 kubelet[1775]: E0213 19:04:09.883461 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:10.240743 kubelet[1775]: E0213 19:04:10.240578 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:10.264453 systemd[1]: run-containerd-runc-k8s.io-a0038fe7e6f4f0da175e9b5605b6a262436c466e96cd74eb123f4a0ad1e9b111-runc.PEzBKf.mount: Deactivated successfully.
Feb 13 19:04:10.533525 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Feb 13 19:04:10.551596 kubelet[1775]: E0213 19:04:10.551561 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:11.240770 kubelet[1775]: E0213 19:04:11.240715 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:11.553897 kubelet[1775]: E0213 19:04:11.553776 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:04:12.241254 kubelet[1775]: E0213 19:04:12.241199 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:13.241827 kubelet[1775]: E0213 19:04:13.241785 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:14.242888 kubelet[1775]: E0213 19:04:14.242839 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:15.243370 kubelet[1775]: E0213 19:04:15.243323 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:04:16.244163 kubelet[1775]: E0213 19:04:16.244117 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"