May 13 00:03:38.910655 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:03:38.910678 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon May 12 22:21:23 -00 2025
May 13 00:03:38.910689 kernel: KASLR enabled
May 13 00:03:38.910694 kernel: efi: EFI v2.7 by EDK II
May 13 00:03:38.910700 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 13 00:03:38.910706 kernel: random: crng init done
May 13 00:03:38.910713 kernel: secureboot: Secure boot disabled
May 13 00:03:38.910720 kernel: ACPI: Early table checksum verification disabled
May 13 00:03:38.910726 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 13 00:03:38.910733 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:03:38.910739 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910745 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910751 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910757 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910765 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910773 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910780 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910787 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910793 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:03:38.910800 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:03:38.910807 kernel: NUMA: Failed to initialise from firmware
May 13 00:03:38.910813 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:03:38.910820 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 13 00:03:38.910826 kernel: Zone ranges:
May 13 00:03:38.910832 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:03:38.910840 kernel: DMA32 empty
May 13 00:03:38.910847 kernel: Normal empty
May 13 00:03:38.910853 kernel: Movable zone start for each node
May 13 00:03:38.910860 kernel: Early memory node ranges
May 13 00:03:38.910866 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:03:38.910873 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:03:38.910879 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:03:38.910886 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:03:38.910892 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:03:38.910899 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:03:38.910905 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:03:38.910912 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:03:38.910920 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:03:38.910926 kernel: psci: probing for conduit method from ACPI.
May 13 00:03:38.910932 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:03:38.910942 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:03:38.910948 kernel: psci: Trusted OS migration not required
May 13 00:03:38.910955 kernel: psci: SMC Calling Convention v1.1
May 13 00:03:38.910963 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:03:38.910970 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:03:38.910976 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:03:38.910983 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:03:38.910990 kernel: Detected PIPT I-cache on CPU0
May 13 00:03:38.910997 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:03:38.911003 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:03:38.911010 kernel: CPU features: detected: Spectre-v4
May 13 00:03:38.911017 kernel: CPU features: detected: Spectre-BHB
May 13 00:03:38.911023 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:03:38.911031 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:03:38.911038 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:03:38.911045 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:03:38.911051 kernel: alternatives: applying boot alternatives
May 13 00:03:38.911059 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4
May 13 00:03:38.911066 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:03:38.911072 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:03:38.911079 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:03:38.911086 kernel: Fallback order for Node 0: 0
May 13 00:03:38.911110 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:03:38.911118 kernel: Policy zone: DMA
May 13 00:03:38.911127 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:03:38.911134 kernel: software IO TLB: area num 4.
May 13 00:03:38.911141 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:03:38.911148 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 13 00:03:38.911155 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:03:38.911162 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:03:38.911169 kernel: rcu: RCU event tracing is enabled.
May 13 00:03:38.911176 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:03:38.911183 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:03:38.911190 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:03:38.911196 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:03:38.911203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:03:38.911211 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:03:38.911218 kernel: GICv3: 256 SPIs implemented
May 13 00:03:38.911224 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:03:38.911231 kernel: Root IRQ handler: gic_handle_irq
May 13 00:03:38.911237 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:03:38.911244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:03:38.911251 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:03:38.911257 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:03:38.911264 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:03:38.911271 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:03:38.911277 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:03:38.911285 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:03:38.911292 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:03:38.911299 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:03:38.911306 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:03:38.911313 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:03:38.911320 kernel: arm-pv: using stolen time PV
May 13 00:03:38.911327 kernel: Console: colour dummy device 80x25
May 13 00:03:38.911334 kernel: ACPI: Core revision 20230628
May 13 00:03:38.911341 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:03:38.911349 kernel: pid_max: default: 32768 minimum: 301
May 13 00:03:38.911357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:03:38.911364 kernel: landlock: Up and running.
May 13 00:03:38.911371 kernel: SELinux: Initializing.
May 13 00:03:38.911379 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:03:38.911386 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:03:38.911398 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:03:38.911409 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:03:38.911416 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:03:38.911423 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:03:38.911432 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:03:38.911439 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:03:38.911446 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:03:38.911453 kernel: Remapping and enabling EFI services.
May 13 00:03:38.911461 kernel: smp: Bringing up secondary CPUs ...
May 13 00:03:38.911468 kernel: Detected PIPT I-cache on CPU1
May 13 00:03:38.911475 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:03:38.911482 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:03:38.911489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:03:38.911496 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:03:38.911506 kernel: Detected PIPT I-cache on CPU2
May 13 00:03:38.911513 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:03:38.911525 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:03:38.911534 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:03:38.911542 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:03:38.911549 kernel: Detected PIPT I-cache on CPU3
May 13 00:03:38.911557 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:03:38.911564 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:03:38.911572 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:03:38.911579 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:03:38.911588 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:03:38.911596 kernel: SMP: Total of 4 processors activated.
May 13 00:03:38.911604 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:03:38.911611 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:03:38.911618 kernel: CPU features: detected: Common not Private translations
May 13 00:03:38.911633 kernel: CPU features: detected: CRC32 instructions
May 13 00:03:38.911640 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:03:38.911650 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:03:38.911657 kernel: CPU features: detected: LSE atomic instructions
May 13 00:03:38.911664 kernel: CPU features: detected: Privileged Access Never
May 13 00:03:38.911672 kernel: CPU features: detected: RAS Extension Support
May 13 00:03:38.911680 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:03:38.911687 kernel: CPU: All CPU(s) started at EL1
May 13 00:03:38.911695 kernel: alternatives: applying system-wide alternatives
May 13 00:03:38.911702 kernel: devtmpfs: initialized
May 13 00:03:38.911710 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:03:38.911719 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:03:38.911726 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:03:38.911734 kernel: SMBIOS 3.0.0 present.
May 13 00:03:38.911742 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 00:03:38.911750 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:03:38.911757 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:03:38.911765 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:03:38.911773 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:03:38.911780 kernel: audit: initializing netlink subsys (disabled)
May 13 00:03:38.911789 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 13 00:03:38.911796 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:03:38.911804 kernel: cpuidle: using governor menu
May 13 00:03:38.911812 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:03:38.911819 kernel: ASID allocator initialised with 32768 entries
May 13 00:03:38.911826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:03:38.911834 kernel: Serial: AMBA PL011 UART driver
May 13 00:03:38.911841 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:03:38.911848 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:03:38.911857 kernel: Modules: 508944 pages in range for PLT usage
May 13 00:03:38.911865 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:03:38.911872 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:03:38.911879 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:03:38.911887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:03:38.911894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:03:38.911901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:03:38.911908 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:03:38.911916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:03:38.911925 kernel: ACPI: Added _OSI(Module Device)
May 13 00:03:38.911932 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:03:38.911939 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:03:38.911947 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:03:38.911954 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:03:38.911961 kernel: ACPI: Interpreter enabled
May 13 00:03:38.911969 kernel: ACPI: Using GIC for interrupt routing
May 13 00:03:38.911976 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:03:38.911984 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:03:38.911992 kernel: printk: console [ttyAMA0] enabled
May 13 00:03:38.912000 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:03:38.912226 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:03:38.912307 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:03:38.912374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:03:38.912438 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:03:38.912504 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:03:38.912517 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:03:38.912524 kernel: PCI host bridge to bus 0000:00
May 13 00:03:38.912597 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:03:38.912668 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:03:38.912734 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:03:38.912806 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:03:38.912893 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:03:38.912975 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:03:38.913044 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:03:38.913125 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:03:38.913194 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:03:38.913262 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:03:38.913328 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:03:38.913394 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:03:38.913455 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:03:38.913512 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:03:38.913572 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:03:38.913581 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:03:38.913589 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:03:38.913596 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:03:38.913604 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:03:38.913611 kernel: iommu: Default domain type: Translated
May 13 00:03:38.913621 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:03:38.913633 kernel: efivars: Registered efivars operations
May 13 00:03:38.913641 kernel: vgaarb: loaded
May 13 00:03:38.913648 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:03:38.913655 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:03:38.913663 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:03:38.913670 kernel: pnp: PnP ACPI init
May 13 00:03:38.913747 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:03:38.913760 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:03:38.913767 kernel: NET: Registered PF_INET protocol family
May 13 00:03:38.913774 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:03:38.913782 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:03:38.913789 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:03:38.913796 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:03:38.913803 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:03:38.913811 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:03:38.913818 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:03:38.913826 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:03:38.913834 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:03:38.913841 kernel: PCI: CLS 0 bytes, default 64
May 13 00:03:38.913848 kernel: kvm [1]: HYP mode not available
May 13 00:03:38.913856 kernel: Initialise system trusted keyrings
May 13 00:03:38.913863 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:03:38.913870 kernel: Key type asymmetric registered
May 13 00:03:38.913877 kernel: Asymmetric key parser 'x509' registered
May 13 00:03:38.913884 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:03:38.913894 kernel: io scheduler mq-deadline registered
May 13 00:03:38.913901 kernel: io scheduler kyber registered
May 13 00:03:38.913908 kernel: io scheduler bfq registered
May 13 00:03:38.913916 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:03:38.913923 kernel: ACPI: button: Power Button [PWRB]
May 13 00:03:38.913931 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:03:38.913997 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:03:38.914009 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:03:38.914017 kernel: thunder_xcv, ver 1.0
May 13 00:03:38.914026 kernel: thunder_bgx, ver 1.0
May 13 00:03:38.914033 kernel: nicpf, ver 1.0
May 13 00:03:38.914040 kernel: nicvf, ver 1.0
May 13 00:03:38.914163 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:03:38.914228 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:03:38 UTC (1747094618)
May 13 00:03:38.914238 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:03:38.914245 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:03:38.914253 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:03:38.914264 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:03:38.914271 kernel: NET: Registered PF_INET6 protocol family
May 13 00:03:38.914279 kernel: Segment Routing with IPv6
May 13 00:03:38.914286 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:03:38.914294 kernel: NET: Registered PF_PACKET protocol family
May 13 00:03:38.914301 kernel: Key type dns_resolver registered
May 13 00:03:38.914308 kernel: registered taskstats version 1
May 13 00:03:38.914315 kernel: Loading compiled-in X.509 certificates
May 13 00:03:38.914323 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f172f0fb4eac06c214e4b9ce0f39d6c4075ccc9a'
May 13 00:03:38.914332 kernel: Key type .fscrypt registered
May 13 00:03:38.914339 kernel: Key type fscrypt-provisioning registered
May 13 00:03:38.914347 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:03:38.914354 kernel: ima: Allocated hash algorithm: sha1
May 13 00:03:38.914362 kernel: ima: No architecture policies found
May 13 00:03:38.914369 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:03:38.914376 kernel: clk: Disabling unused clocks
May 13 00:03:38.914384 kernel: Freeing unused kernel memory: 39744K
May 13 00:03:38.914391 kernel: Run /init as init process
May 13 00:03:38.914400 kernel: with arguments:
May 13 00:03:38.914407 kernel: /init
May 13 00:03:38.914414 kernel: with environment:
May 13 00:03:38.914421 kernel: HOME=/
May 13 00:03:38.914428 kernel: TERM=linux
May 13 00:03:38.914435 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:03:38.914444 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:03:38.914453 systemd[1]: Detected virtualization kvm.
May 13 00:03:38.914463 systemd[1]: Detected architecture arm64.
May 13 00:03:38.914470 systemd[1]: Running in initrd.
May 13 00:03:38.914478 systemd[1]: No hostname configured, using default hostname.
May 13 00:03:38.914486 systemd[1]: Hostname set to .
May 13 00:03:38.914493 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:03:38.914501 systemd[1]: Queued start job for default target initrd.target.
May 13 00:03:38.914509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:03:38.914517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:03:38.914527 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:03:38.914535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:03:38.914543 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:03:38.914551 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:03:38.914560 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:03:38.914568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:03:38.914577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:03:38.914585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:03:38.914593 systemd[1]: Reached target paths.target - Path Units.
May 13 00:03:38.914600 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:03:38.914622 systemd[1]: Reached target swap.target - Swaps.
May 13 00:03:38.914637 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:03:38.914644 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:03:38.914652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:03:38.914660 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:03:38.914671 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:03:38.914679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:03:38.914686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:03:38.914694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:03:38.914702 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:03:38.914710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:03:38.914717 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:03:38.914725 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:03:38.914733 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:03:38.914743 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:03:38.914752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:03:38.914760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:03:38.914768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:03:38.914776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:03:38.914784 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:03:38.914793 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:03:38.914823 systemd-journald[238]: Collecting audit messages is disabled.
May 13 00:03:38.914845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:03:38.914853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:03:38.914862 systemd-journald[238]: Journal started
May 13 00:03:38.914880 systemd-journald[238]: Runtime Journal (/run/log/journal/f18d560c62b240ae869b595124014e4f) is 5.9M, max 47.3M, 41.4M free.
May 13 00:03:38.905762 systemd-modules-load[240]: Inserted module 'overlay'
May 13 00:03:38.918732 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:03:38.923113 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:03:38.924474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:03:38.927499 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 13 00:03:38.928289 kernel: Bridge firewalling registered
May 13 00:03:38.928507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:03:38.931244 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:03:38.934587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:03:38.939481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:03:38.941938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:03:38.945180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:03:38.947857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:03:38.950002 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:03:38.954416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:03:38.957717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:03:38.964344 dracut-cmdline[273]: dracut-dracut-053
May 13 00:03:38.972795 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4
May 13 00:03:39.002667 systemd-resolved[278]: Positive Trust Anchors:
May 13 00:03:39.002784 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:03:39.002816 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:03:39.007957 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 13 00:03:39.009043 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:03:39.010852 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:03:39.047131 kernel: SCSI subsystem initialized
May 13 00:03:39.052112 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:03:39.060113 kernel: iscsi: registered transport (tcp)
May 13 00:03:39.076285 kernel: iscsi: registered transport (qla4xxx)
May 13 00:03:39.076322 kernel: QLogic iSCSI HBA Driver
May 13 00:03:39.125511 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:03:39.138317 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:03:39.156162 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:03:39.156238 kernel: device-mapper: uevent: version 1.0.3
May 13 00:03:39.156249 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:03:39.215121 kernel: raid6: neonx8 gen() 15757 MB/s
May 13 00:03:39.232122 kernel: raid6: neonx4 gen() 15657 MB/s
May 13 00:03:39.249130 kernel: raid6: neonx2 gen() 12999 MB/s
May 13 00:03:39.266146 kernel: raid6: neonx1 gen() 10475 MB/s
May 13 00:03:39.283128 kernel: raid6: int64x8 gen() 6960 MB/s
May 13 00:03:39.300131 kernel: raid6: int64x4 gen() 7343 MB/s
May 13 00:03:39.317131 kernel: raid6: int64x2 gen() 6130 MB/s
May 13 00:03:39.334111 kernel: raid6: int64x1 gen() 5049 MB/s
May 13 00:03:39.334136 kernel: raid6: using algorithm neonx8 gen() 15757 MB/s
May 13 00:03:39.351113 kernel: raid6: .... xor() 11734 MB/s, rmw enabled
May 13 00:03:39.351127 kernel: raid6: using neon recovery algorithm
May 13 00:03:39.356116 kernel: xor: measuring software checksum speed
May 13 00:03:39.356152 kernel: 8regs : 18436 MB/sec
May 13 00:03:39.357108 kernel: 32regs : 19276 MB/sec
May 13 00:03:39.358139 kernel: arm64_neon : 25520 MB/sec
May 13 00:03:39.358160 kernel: xor: using function: arm64_neon (25520 MB/sec)
May 13 00:03:39.418146 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:03:39.443055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:03:39.453340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:03:39.469481 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 13 00:03:39.473195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:03:39.483014 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:03:39.504200 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 13 00:03:39.538997 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:03:39.553296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:03:39.597724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:03:39.605316 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:03:39.620381 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:03:39.621997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:03:39.626330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:03:39.628294 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:03:39.639328 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:03:39.652669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:03:39.655190 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:03:39.655344 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:03:39.652793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:03:39.658721 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:03:39.658737 kernel: GPT:9289727 != 19775487
May 13 00:03:39.658746 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:03:39.658755 kernel: GPT:9289727 != 19775487
May 13 00:03:39.658766 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:03:39.658776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:03:39.659991 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:03:39.661243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:03:39.661405 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:03:39.663807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:03:39.672662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:03:39.676205 kernel: BTRFS: device fsid 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (518)
May 13 00:03:39.674069 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:03:39.680114 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (519)
May 13 00:03:39.689140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:03:39.696611 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:03:39.700944 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:03:39.707055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:03:39.708893 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:03:39.713055 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:03:39.725274 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:03:39.727293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:03:39.749837 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:03:39.777002 disk-uuid[550]: Primary Header is updated.
May 13 00:03:39.777002 disk-uuid[550]: Secondary Entries is updated.
May 13 00:03:39.777002 disk-uuid[550]: Secondary Header is updated.
May 13 00:03:39.779472 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:03:39.789113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:03:40.791121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:03:40.791700 disk-uuid[559]: The operation has completed successfully.
May 13 00:03:40.818968 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:03:40.819701 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:03:40.853337 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:03:40.856254 sh[573]: Success
May 13 00:03:40.871119 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:03:40.912442 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:03:40.913895 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:03:40.914727 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:03:40.925829 kernel: BTRFS info (device dm-0): first mount of filesystem 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a
May 13 00:03:40.925878 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:03:40.925889 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:03:40.927453 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:03:40.927480 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:03:40.931935 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:03:40.933437 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:03:40.947283 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:03:40.949663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:03:40.956627 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 13 00:03:40.956679 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:03:40.956690 kernel: BTRFS info (device vda6): using free space tree
May 13 00:03:40.959184 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:03:40.966778 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:03:40.967795 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 13 00:03:40.975194 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:03:40.982381 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:03:41.051593 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:03:41.063293 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:03:41.089349 ignition[666]: Ignition 2.20.0
May 13 00:03:41.089359 ignition[666]: Stage: fetch-offline
May 13 00:03:41.089398 ignition[666]: no configs at "/usr/lib/ignition/base.d"
May 13 00:03:41.089406 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:03:41.089568 ignition[666]: parsed url from cmdline: ""
May 13 00:03:41.089572 ignition[666]: no config URL provided
May 13 00:03:41.089577 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:03:41.089583 ignition[666]: no config at "/usr/lib/ignition/user.ign"
May 13 00:03:41.089612 ignition[666]: op(1): [started] loading QEMU firmware config module
May 13 00:03:41.089623 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:03:41.096784 systemd-networkd[766]: lo: Link UP
May 13 00:03:41.096788 systemd-networkd[766]: lo: Gained carrier
May 13 00:03:41.098624 systemd-networkd[766]: Enumeration completed
May 13 00:03:41.099393 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:03:41.099899 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:03:41.099902 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:03:41.100660 systemd-networkd[766]: eth0: Link UP
May 13 00:03:41.100663 systemd-networkd[766]: eth0: Gained carrier
May 13 00:03:41.100669 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:03:41.102172 systemd[1]: Reached target network.target - Network.
May 13 00:03:41.109615 ignition[666]: op(1): [finished] loading QEMU firmware config module
May 13 00:03:41.116440 ignition[666]: parsing config with SHA512: fcc490968acec2dd1445d08e921532d54ae2ea1e64e0b1197c78f4dda4ef9f08404005a52be9afc7edfe5ee06d311bc3111d8ee9883f3630e5cb2446b0d8da93
May 13 00:03:41.120141 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:03:41.120420 unknown[666]: fetched base config from "system"
May 13 00:03:41.120702 ignition[666]: fetch-offline: fetch-offline passed
May 13 00:03:41.120427 unknown[666]: fetched user config from "qemu"
May 13 00:03:41.120773 ignition[666]: Ignition finished successfully
May 13 00:03:41.122014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:03:41.124002 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:03:41.131234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:03:41.143528 ignition[772]: Ignition 2.20.0
May 13 00:03:41.143539 ignition[772]: Stage: kargs
May 13 00:03:41.143705 ignition[772]: no configs at "/usr/lib/ignition/base.d"
May 13 00:03:41.143716 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:03:41.144371 ignition[772]: kargs: kargs passed
May 13 00:03:41.144413 ignition[772]: Ignition finished successfully
May 13 00:03:41.147756 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:03:41.160286 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:03:41.169840 ignition[781]: Ignition 2.20.0
May 13 00:03:41.169855 ignition[781]: Stage: disks
May 13 00:03:41.170035 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 13 00:03:41.170045 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:03:41.170727 ignition[781]: disks: disks passed
May 13 00:03:41.172295 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:03:41.170771 ignition[781]: Ignition finished successfully
May 13 00:03:41.173277 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:03:41.174236 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:03:41.175802 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:03:41.176876 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:03:41.178318 systemd[1]: Reached target basic.target - Basic System.
May 13 00:03:41.190267 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:03:41.200101 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:03:41.203942 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:03:41.206965 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:03:41.251114 kernel: EXT4-fs (vda9): mounted filesystem 267e1a87-2243-4e28-a518-ba9876b017ec r/w with ordered data mode. Quota mode: none.
May 13 00:03:41.251142 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:03:41.252209 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:03:41.259186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:03:41.260717 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:03:41.262719 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:03:41.262792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:03:41.262819 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:03:41.267879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:03:41.270114 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
May 13 00:03:41.270137 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 13 00:03:41.270147 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:03:41.270156 kernel: BTRFS info (device vda6): using free space tree
May 13 00:03:41.272842 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:03:41.274690 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:03:41.274598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:03:41.329937 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:03:41.334057 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 13 00:03:41.337559 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:03:41.340590 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:03:41.430743 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:03:41.446243 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:03:41.447798 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:03:41.453112 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 13 00:03:41.466730 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:03:41.475260 ignition[914]: INFO : Ignition 2.20.0
May 13 00:03:41.475260 ignition[914]: INFO : Stage: mount
May 13 00:03:41.476522 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:03:41.476522 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:03:41.476522 ignition[914]: INFO : mount: mount passed
May 13 00:03:41.478558 ignition[914]: INFO : Ignition finished successfully
May 13 00:03:41.479032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:03:41.494355 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:03:41.924908 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:03:41.938343 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:03:41.944330 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
May 13 00:03:41.944363 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a
May 13 00:03:41.944374 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:03:41.945467 kernel: BTRFS info (device vda6): using free space tree
May 13 00:03:41.947106 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:03:41.948309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:03:41.962273 ignition[946]: INFO : Ignition 2.20.0
May 13 00:03:41.962273 ignition[946]: INFO : Stage: files
May 13 00:03:41.963523 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:03:41.963523 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:03:41.963523 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:03:41.965969 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:03:41.965969 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:03:41.965969 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:03:41.965969 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:03:41.970009 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:03:41.970009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 00:03:41.966001 unknown[946]: wrote ssh authorized keys file for user: core
May 13 00:03:42.267131 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 13 00:03:42.393221 systemd-networkd[766]: eth0: Gained IPv6LL
May 13 00:03:42.646529 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:03:42.646529 ignition[946]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 13 00:03:42.649305 ignition[946]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:03:42.649305 ignition[946]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:03:42.649305 ignition[946]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 13 00:03:42.649305 ignition[946]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:03:42.676926 ignition[946]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:03:42.680527 ignition[946]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:03:42.681901 ignition[946]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:03:42.681901 ignition[946]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:03:42.681901 ignition[946]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:03:42.681901 ignition[946]: INFO : files: files passed
May 13 00:03:42.681901 ignition[946]: INFO : Ignition finished successfully
May 13 00:03:42.682961 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:03:42.691354 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:03:42.692881 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:03:42.696788 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:03:42.697587 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:03:42.700443 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:03:42.703384 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:03:42.703384 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:03:42.706025 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:03:42.707995 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:03:42.710476 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:03:42.723325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:03:42.743582 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:03:42.743721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:03:42.745685 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:03:42.746744 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:03:42.748490 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:03:42.749225 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:03:42.764433 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:03:42.766868 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:03:42.777823 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:03:42.779075 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:03:42.780895 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:03:42.782385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:03:42.782507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:03:42.784632 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:03:42.786337 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:03:42.787751 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:03:42.789304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:03:42.790982 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:03:42.792677 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:03:42.794233 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:03:42.795948 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:03:42.797604 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:03:42.799161 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:03:42.800485 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:03:42.800624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:03:42.802562 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:03:42.804231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:03:42.805877 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:03:42.809165 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:03:42.810358 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:03:42.810471 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:03:42.812726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:03:42.812839 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:03:42.814426 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:03:42.815501 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:03:42.819702 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:03:42.821691 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:03:42.822480 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:03:42.825115 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:03:42.825220 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:03:42.826277 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:03:42.826362 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:03:42.827525 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:03:42.827655 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:03:42.828898 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:03:42.829000 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:03:42.844342 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:03:42.846417 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:03:42.847061 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:03:42.847201 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:03:42.848553 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:03:42.848665 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:03:42.853126 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:03:42.854975 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:03:42.857675 ignition[1000]: INFO : Ignition 2.20.0 May 13 00:03:42.857675 ignition[1000]: INFO : Stage: umount May 13 00:03:42.859121 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:03:42.859121 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:42.859121 ignition[1000]: INFO : umount: umount passed May 13 00:03:42.859121 ignition[1000]: INFO : Ignition finished successfully May 13 00:03:42.860419 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:03:42.860941 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:03:42.861043 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:03:42.862663 systemd[1]: Stopped target network.target - Network. May 13 00:03:42.863636 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:03:42.863713 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:03:42.865190 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:03:42.865231 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:03:42.866514 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:03:42.866552 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:03:42.868023 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:03:42.868067 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:03:42.870053 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:03:42.871464 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:03:42.878470 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:03:42.878603 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:03:42.879137 systemd-networkd[766]: eth0: DHCPv6 lease lost May 13 00:03:42.881278 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:03:42.881397 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:03:42.883302 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:03:42.883360 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:03:42.899273 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:03:42.899964 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:03:42.900032 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:03:42.901487 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:03:42.901529 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:03:42.902889 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:03:42.902932 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:03:42.904574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:03:42.904626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 13 00:03:42.906141 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:03:42.915977 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:03:42.916957 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:03:42.927932 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:03:42.928122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:03:42.929984 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:03:42.930026 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:03:42.931229 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:03:42.931258 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:03:42.932535 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:03:42.932578 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:03:42.934496 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:03:42.934541 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:03:42.936445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:03:42.936489 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:03:42.951250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:03:42.952031 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:03:42.952084 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:03:42.953775 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:03:42.953815 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:03:42.955237 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:03:42.955276 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:03:42.956920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:03:42.956964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:03:42.958862 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:03:42.959166 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:03:43.015412 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:03:43.015527 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:03:43.016975 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:03:43.019811 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:03:43.019858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:03:43.028261 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:03:43.033634 systemd[1]: Switching root. May 13 00:03:43.053952 systemd-journald[238]: Journal stopped May 13 00:03:43.770407 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
May 13 00:03:43.770463 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:03:43.770476 kernel: SELinux: policy capability open_perms=1 May 13 00:03:43.770485 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:03:43.770495 kernel: SELinux: policy capability always_check_network=0 May 13 00:03:43.770508 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:03:43.770517 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:03:43.770530 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:03:43.770539 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:03:43.770549 kernel: audit: type=1403 audit(1747094623.224:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:03:43.770559 systemd[1]: Successfully loaded SELinux policy in 31.292ms. May 13 00:03:43.770576 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.394ms. May 13 00:03:43.770588 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:03:43.770598 systemd[1]: Detected virtualization kvm. May 13 00:03:43.770622 systemd[1]: Detected architecture arm64. May 13 00:03:43.770633 systemd[1]: Detected first boot. May 13 00:03:43.770646 systemd[1]: Initializing machine ID from VM UUID. May 13 00:03:43.770658 zram_generator::config[1044]: No configuration found. May 13 00:03:43.770669 systemd[1]: Populated /etc with preset unit settings. May 13 00:03:43.770679 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:03:43.770690 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:03:43.770702 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:03:43.770714 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:03:43.770724 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:03:43.770734 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:03:43.770744 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:03:43.770754 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:03:43.770765 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:03:43.770775 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:03:43.770787 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:03:43.770798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:03:43.770808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:03:43.770818 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:03:43.770829 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:03:43.770839 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 13 00:03:43.770850 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:03:43.770860 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:03:43.770870 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:03:43.770883 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:03:43.770894 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:03:43.770904 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:03:43.770914 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:03:43.770925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:03:43.770935 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:03:43.770945 systemd[1]: Reached target slices.target - Slice Units. May 13 00:03:43.770955 systemd[1]: Reached target swap.target - Swaps. May 13 00:03:43.770967 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:03:43.770978 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:03:43.770988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:03:43.770999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:03:43.771009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:03:43.771019 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:03:43.771030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:03:43.771040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:03:43.771050 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:03:43.771062 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:03:43.771072 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:03:43.771083 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:03:43.771102 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:03:43.771113 systemd[1]: Reached target machines.target - Containers. May 13 00:03:43.771123 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:03:43.771134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:43.771144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:03:43.771154 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:03:43.771167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:43.771177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:03:43.771187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:43.771198 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:03:43.771208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 13 00:03:43.771218 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:03:43.771228 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:03:43.771239 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:03:43.771250 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:03:43.771260 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:03:43.771270 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:03:43.771280 kernel: ACPI: bus type drm_connector registered May 13 00:03:43.771289 kernel: fuse: init (API version 7.39) May 13 00:03:43.771299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:03:43.771308 kernel: loop: module loaded May 13 00:03:43.771319 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:03:43.771329 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:03:43.771341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:03:43.771351 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:03:43.771362 systemd[1]: Stopped verity-setup.service. May 13 00:03:43.771388 systemd-journald[1115]: Collecting audit messages is disabled. May 13 00:03:43.771409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:03:43.771419 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:03:43.771430 systemd-journald[1115]: Journal started May 13 00:03:43.771452 systemd-journald[1115]: Runtime Journal (/run/log/journal/f18d560c62b240ae869b595124014e4f) is 5.9M, max 47.3M, 41.4M free. May 13 00:03:43.584641 systemd[1]: Queued start job for default target multi-user.target. May 13 00:03:43.601038 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:03:43.601376 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:03:43.774155 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:03:43.775055 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:03:43.775970 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:03:43.776933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:03:43.777881 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:03:43.779113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:03:43.780247 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:03:43.781565 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:03:43.781856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:03:43.783025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:43.783190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:43.784346 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:03:43.784476 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:03:43.785736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:03:43.785865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 13 00:03:43.788428 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:03:43.788580 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:03:43.789650 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:43.789777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:43.791058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:03:43.792291 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:03:43.793447 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:03:43.806838 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:03:43.817262 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:03:43.819102 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:03:43.819927 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:03:43.819969 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:03:43.821690 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:03:43.823600 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:03:43.825520 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:03:43.826402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:43.827883 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:03:43.829624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:03:43.830522 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:03:43.834343 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:03:43.835247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:03:43.837027 systemd-journald[1115]: Time spent on flushing to /var/log/journal/f18d560c62b240ae869b595124014e4f is 25.039ms for 840 entries. May 13 00:03:43.837027 systemd-journald[1115]: System Journal (/var/log/journal/f18d560c62b240ae869b595124014e4f) is 8.0M, max 195.6M, 187.6M free. May 13 00:03:43.867898 systemd-journald[1115]: Received client request to flush runtime journal. May 13 00:03:43.867938 kernel: loop0: detected capacity change from 0 to 116808 May 13 00:03:43.837329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:03:43.843333 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:03:43.848349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:03:43.852466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:03:43.853766 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:03:43.854927 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
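The journald messages above describe the runtime journal in /run/log/journal being flushed to persistent storage under /var/log/journal. A sketch of how the same flush can be inspected or triggered by hand; sizes will of course differ from the ones recorded here:

```sh
# Inspect and trigger the runtime-to-persistent journal flush logged above.
journalctl --disk-usage   # space used by runtime and persistent journal files
journalctl --flush        # ask journald to move /run/log/journal entries to /var/log/journal
```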
May 13 00:03:43.856162 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:03:43.857807 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:03:43.861476 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:03:43.870764 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:03:43.875158 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:03:43.877658 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:03:43.881250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:03:43.888167 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:03:43.889895 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:03:43.892212 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:03:43.895175 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:03:43.902062 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. May 13 00:03:43.902079 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. May 13 00:03:43.907001 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:03:43.923280 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:03:43.928111 kernel: loop1: detected capacity change from 0 to 113536 May 13 00:03:43.948007 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:03:43.957374 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:03:43.963120 kernel: loop2: detected capacity change from 0 to 201592 May 13 00:03:43.971054 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 13 00:03:43.971075 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 13 00:03:43.977134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:03:43.998229 kernel: loop3: detected capacity change from 0 to 116808 May 13 00:03:44.003123 kernel: loop4: detected capacity change from 0 to 113536 May 13 00:03:44.008180 kernel: loop5: detected capacity change from 0 to 201592 May 13 00:03:44.014865 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:03:44.015275 (sd-merge)[1184]: Merged extensions into '/usr'. May 13 00:03:44.018782 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:03:44.018803 systemd[1]: Reloading... May 13 00:03:44.065121 zram_generator::config[1210]: No configuration found. May 13 00:03:44.102901 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:03:44.158922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:03:44.195031 systemd[1]: Reloading finished in 175 ms. May 13 00:03:44.221465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
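The sd-merge messages above come from systemd-sysext layering the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images over /usr. A sketch of how the merged extensions can be listed after boot; the exact output columns depend on the systemd version (255 here):

```sh
# List the system extensions that systemd-sysext merged into /usr during this boot.
systemd-sysext status   # shows the /usr hierarchy and the extensions merged into it
systemd-sysext list     # shows the extension images that were found
```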
May 13 00:03:44.222709 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:03:44.241328 systemd[1]: Starting ensure-sysext.service... May 13 00:03:44.243181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:03:44.254748 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... May 13 00:03:44.254766 systemd[1]: Reloading... May 13 00:03:44.265386 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:03:44.265669 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:03:44.266763 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:03:44.267061 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. May 13 00:03:44.267194 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. May 13 00:03:44.269476 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:03:44.269577 systemd-tmpfiles[1245]: Skipping /boot May 13 00:03:44.277014 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:03:44.277141 systemd-tmpfiles[1245]: Skipping /boot May 13 00:03:44.300121 zram_generator::config[1272]: No configuration found. May 13 00:03:44.388376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:03:44.423450 systemd[1]: Reloading finished in 168 ms. May 13 00:03:44.439184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:03:44.452549 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:03:44.459938 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:44.462347 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:03:44.464594 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:03:44.468430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:03:44.474613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:03:44.483586 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:03:44.486629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:44.490391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:44.492580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:44.500529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:03:44.501628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:44.506813 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:03:44.508475 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:03:44.509979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
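The "Duplicate line for path" notices above are benign: two tmpfiles.d drop-ins declare the same path and the later declaration is ignored. A sketch of the tmpfiles.d line format involved; the entries below are illustrative, not the contents of the actual Flatcar drop-ins:

```sh
# Illustrative tmpfiles.d entries of the kind behind the "Duplicate line" notices:
# type, path, mode, user, group, age. Declaring the same path in two drop-ins
# triggers the notice seen above.
cat <<'EOF'
d /root            0700 root root            -
d /var/log/journal 2755 root systemd-journal -
EOF
```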
May 13 00:03:44.510126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:03:44.511572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:44.511711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:44.513193 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:44.513312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:44.521519 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:03:44.521762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:03:44.529479 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:03:44.532136 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:03:44.534752 systemd-udevd[1313]: Using default interface naming scheme 'v255'. May 13 00:03:44.537183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:44.537445 augenrules[1343]: No rules May 13 00:03:44.539524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:44.542463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:44.545430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:03:44.546294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:44.546420 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:03:44.550030 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:44.550285 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:44.551626 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:03:44.553210 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:03:44.554871 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:03:44.556477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:44.556628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:44.558308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:03:44.558453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:03:44.562907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:03:44.564802 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:44.566276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:44.579749 systemd[1]: Finished ensure-sysext.service. May 13 00:03:44.602723 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:44.603108 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1382) May 13 00:03:44.603943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 13 00:03:44.608323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:44.610377 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:03:44.614565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:44.618567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:03:44.622315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:44.625340 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:03:44.629003 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:03:44.630737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:03:44.631243 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:03:44.633134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:03:44.641368 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:03:44.641676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:44.641865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:44.654762 systemd-resolved[1311]: Positive Trust Anchors: May 13 00:03:44.654837 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:03:44.654869 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:03:44.660887 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:44.661075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:44.662786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:03:44.665727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:03:44.667136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:03:44.668558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:03:44.670825 augenrules[1381]: /sbin/augenrules: No change May 13 00:03:44.679433 augenrules[1415]: No rules May 13 00:03:44.680244 systemd-resolved[1311]: Defaulting to hostname 'linux'. May 13 00:03:44.682964 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:44.683173 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:44.687299 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:03:44.688488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 13 00:03:44.697392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:03:44.704256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:03:44.728868 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:03:44.730342 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:03:44.735159 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:03:44.742949 systemd-networkd[1392]: lo: Link UP May 13 00:03:44.742960 systemd-networkd[1392]: lo: Gained carrier May 13 00:03:44.746334 systemd-networkd[1392]: Enumeration completed May 13 00:03:44.751376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:03:44.752588 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:03:44.755209 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:03:44.755217 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:03:44.755309 systemd[1]: Reached target network.target - Network. May 13 00:03:44.756019 systemd-networkd[1392]: eth0: Link UP May 13 00:03:44.756022 systemd-networkd[1392]: eth0: Gained carrier May 13 00:03:44.756035 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:03:44.757672 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:03:44.767151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:03:44.770168 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:03:44.770945 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 13 00:03:44.771841 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:03:44.771884 systemd-timesyncd[1393]: Initial clock synchronization to Tue 2025-05-13 00:03:45.140631 UTC. May 13 00:03:44.777286 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:03:44.793589 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:03:44.804134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:03:44.829745 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:03:44.831001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:03:44.831891 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:03:44.832786 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:03:44.833750 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:03:44.834853 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:03:44.835793 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:03:44.836746 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
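eth0 above is matched by a catch-all DHCP unit, zz-default.network. A sketch of what such a unit typically contains, plus a way to confirm the lease; the actual file shipped by Flatcar may differ, so treat the fragment as illustrative:

```sh
# Illustrative catch-all DHCP .network unit of the kind matched above; the real
# /usr/lib/systemd/network/zz-default.network may carry additional settings.
cat <<'EOF'
[Match]
Name=*

[Network]
DHCP=yes
EOF

networkctl status eth0   # would show the DHCPv4 lease (10.0.0.133/16 in this boot)
```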
May 13 00:03:44.837647 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:03:44.837682 systemd[1]: Reached target paths.target - Path Units. May 13 00:03:44.838344 systemd[1]: Reached target timers.target - Timer Units. May 13 00:03:44.839914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:03:44.842487 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:03:44.852100 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:03:44.854402 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:03:44.855995 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:03:44.857206 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:03:44.858150 systemd[1]: Reached target basic.target - Basic System. May 13 00:03:44.859106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:03:44.859138 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:03:44.860130 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:03:44.862115 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:03:44.863292 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:03:44.865941 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:03:44.868523 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:03:44.869654 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:03:44.873319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:03:44.877149 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:03:44.881354 jq[1444]: false May 13 00:03:44.881871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:03:44.887137 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:03:44.895585 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:03:44.897539 extend-filesystems[1445]: Found loop3 May 13 00:03:44.897539 extend-filesystems[1445]: Found loop4 May 13 00:03:44.897539 extend-filesystems[1445]: Found loop5 May 13 00:03:44.897539 extend-filesystems[1445]: Found vda May 13 00:03:44.897539 extend-filesystems[1445]: Found vda1 May 13 00:03:44.897539 extend-filesystems[1445]: Found vda2 May 13 00:03:44.897539 extend-filesystems[1445]: Found vda3 May 13 00:03:44.897539 extend-filesystems[1445]: Found usr May 13 00:03:44.897539 extend-filesystems[1445]: Found vda4 May 13 00:03:44.897539 extend-filesystems[1445]: Found vda6 May 13 00:03:44.897539 extend-filesystems[1445]: Found vda7 May 13 00:03:44.897539 extend-filesystems[1445]: Found vda9 May 13 00:03:44.897539 extend-filesystems[1445]: Checking size of /dev/vda9 May 13 00:03:44.896236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 13 00:03:44.899261 dbus-daemon[1443]: [system] SELinux support is enabled May 13 00:03:44.896989 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:03:44.899356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:03:44.900808 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:03:44.920325 jq[1459]: true May 13 00:03:44.905745 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:03:44.910195 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:03:44.910362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:03:44.910633 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:03:44.910772 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:03:44.919390 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:03:44.919444 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:03:44.921138 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:03:44.921163 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:03:44.928199 extend-filesystems[1445]: Resized partition /dev/vda9 May 13 00:03:44.929016 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:03:44.929256 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:03:44.929435 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:03:44.932718 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) May 13 00:03:44.936115 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:03:44.949905 jq[1467]: true May 13 00:03:44.957835 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:03:44.963144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1364) May 13 00:03:44.967892 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:03:44.970875 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:03:44.970875 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:03:44.970875 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:03:44.982104 extend-filesystems[1445]: Resized filesystem in /dev/vda9 May 13 00:03:44.983708 update_engine[1455]: I20250513 00:03:44.972745 1455 main.cc:92] Flatcar Update Engine starting May 13 00:03:44.983708 update_engine[1455]: I20250513 00:03:44.982046 1455 update_check_scheduler.cc:74] Next update check in 6m35s May 13 00:03:44.971353 systemd-logind[1450]: New seat seat0. May 13 00:03:44.972178 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:03:44.972483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:03:44.978226 systemd[1]: Started systemd-logind.service - User Login Management. 
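extend-filesystems.service above grows the root ext4 filesystem online, from 553472 to 1864699 4k blocks, to fill its partition. The equivalent manual step, using the device name reported in the log, would be:

```sh
# Equivalent of the online resize performed by extend-filesystems.service above.
resize2fs /dev/vda9   # with no size argument, grow the ext4 filesystem to fill the partition
df -h /               # confirm the new size of the root filesystem
```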
May 13 00:03:44.985501 systemd[1]: Started update-engine.service - Update Engine. May 13 00:03:44.992365 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:03:45.028953 bash[1495]: Updated "/home/core/.ssh/authorized_keys" May 13 00:03:45.031239 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:03:45.033253 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:03:45.045441 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:03:45.152581 containerd[1470]: time="2025-05-13T00:03:45.152432468Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 13 00:03:45.181886 containerd[1470]: time="2025-05-13T00:03:45.181622605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:45.183083 containerd[1470]: time="2025-05-13T00:03:45.183048536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:45.183171 containerd[1470]: time="2025-05-13T00:03:45.183156201Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:03:45.183229 containerd[1470]: time="2025-05-13T00:03:45.183216290Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:03:45.183453 containerd[1470]: time="2025-05-13T00:03:45.183434257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:03:45.183520 containerd[1470]: time="2025-05-13T00:03:45.183507652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:03:45.183633 containerd[1470]: time="2025-05-13T00:03:45.183615945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:45.183688 containerd[1470]: time="2025-05-13T00:03:45.183675490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:45.183938 containerd[1470]: time="2025-05-13T00:03:45.183915551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:45.184012 containerd[1470]: time="2025-05-13T00:03:45.183997943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:03:45.184069 containerd[1470]: time="2025-05-13T00:03:45.184055981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184101842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184225409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184454297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184572048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184586149Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184658875Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:03:45.184853 containerd[1470]: time="2025-05-13T00:03:45.184699631Z" level=info msg="metadata content store policy set" policy=shared May 13 00:03:45.230787 containerd[1470]: time="2025-05-13T00:03:45.230706689Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:03:45.231021 containerd[1470]: time="2025-05-13T00:03:45.231004830Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:03:45.231186 containerd[1470]: time="2025-05-13T00:03:45.231170826Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:03:45.231282 containerd[1470]: time="2025-05-13T00:03:45.231268993Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:03:45.231349 containerd[1470]: time="2025-05-13T00:03:45.231336112Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:03:45.231629 containerd[1470]: time="2025-05-13T00:03:45.231611071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:03:45.232039 containerd[1470]: time="2025-05-13T00:03:45.232012150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:03:45.232294 containerd[1470]: time="2025-05-13T00:03:45.232275058Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:03:45.232378 containerd[1470]: time="2025-05-13T00:03:45.232365191Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:03:45.232446 containerd[1470]: time="2025-05-13T00:03:45.232421680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:03:45.232500 containerd[1470]: time="2025-05-13T00:03:45.232487753Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:03:45.232584 containerd[1470]: time="2025-05-13T00:03:45.232570772Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 13 00:03:45.232656 containerd[1470]: time="2025-05-13T00:03:45.232643916Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:03:45.232733 containerd[1470]: time="2025-05-13T00:03:45.232720073Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:03:45.232796 containerd[1470]: time="2025-05-13T00:03:45.232778780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:03:45.232855 containerd[1470]: time="2025-05-13T00:03:45.232842886Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:03:45.232975 containerd[1470]: time="2025-05-13T00:03:45.232959130Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:03:45.233041 containerd[1470]: time="2025-05-13T00:03:45.233029888Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:03:45.233122 containerd[1470]: time="2025-05-13T00:03:45.233093450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233176 containerd[1470]: time="2025-05-13T00:03:45.233165130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233255 containerd[1470]: time="2025-05-13T00:03:45.233241244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233339 containerd[1470]: time="2025-05-13T00:03:45.233325393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233404 containerd[1470]: time="2025-05-13T00:03:45.233392219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233471 containerd[1470]: time="2025-05-13T00:03:45.233451680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233528 containerd[1470]: time="2025-05-13T00:03:45.233515827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233588 containerd[1470]: time="2025-05-13T00:03:45.233568175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233745 containerd[1470]: time="2025-05-13T00:03:45.233643704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233745 containerd[1470]: time="2025-05-13T00:03:45.233674250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233745 containerd[1470]: time="2025-05-13T00:03:45.233689440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233745 containerd[1470]: time="2025-05-13T00:03:45.233711450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:03:45.233745 containerd[1470]: time="2025-05-13T00:03:45.233725300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 13 00:03:45.233881 containerd[1470]: time="2025-05-13T00:03:45.233866316Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:03:45.234073 containerd[1470]: time="2025-05-13T00:03:45.234010093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:03:45.234073 containerd[1470]: time="2025-05-13T00:03:45.234031308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:03:45.234073 containerd[1470]: time="2025-05-13T00:03:45.234042522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:03:45.234406 containerd[1470]: time="2025-05-13T00:03:45.234364766Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:03:45.234592 containerd[1470]: time="2025-05-13T00:03:45.234394936Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:03:45.234592 containerd[1470]: time="2025-05-13T00:03:45.234530805Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:03:45.234592 containerd[1470]: time="2025-05-13T00:03:45.234546664Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:03:45.234592 containerd[1470]: time="2025-05-13T00:03:45.234557125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:03:45.234592 containerd[1470]: time="2025-05-13T00:03:45.234570933Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:03:45.234859 containerd[1470]: time="2025-05-13T00:03:45.234581896Z" level=info msg="NRI interface is disabled by configuration." May 13 00:03:45.234859 containerd[1470]: time="2025-05-13T00:03:45.234737934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:03:45.235350 containerd[1470]: time="2025-05-13T00:03:45.235230986Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:03:45.235350 containerd[1470]: time="2025-05-13T00:03:45.235293042Z" level=info msg="Connect containerd service" May 13 00:03:45.235709 containerd[1470]: time="2025-05-13T00:03:45.235524316Z" level=info msg="using legacy CRI server" May 13 00:03:45.235709 containerd[1470]: time="2025-05-13T00:03:45.235542351Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:03:45.235903 containerd[1470]: time="2025-05-13T00:03:45.235887357Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:03:45.236897 containerd[1470]: time="2025-05-13T00:03:45.236870617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:03:45.237287 
containerd[1470]: time="2025-05-13T00:03:45.237173779Z" level=info msg="Start subscribing containerd event" May 13 00:03:45.237575 containerd[1470]: time="2025-05-13T00:03:45.237556697Z" level=info msg="Start recovering state" May 13 00:03:45.238218 containerd[1470]: time="2025-05-13T00:03:45.238023304Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:03:45.238382 containerd[1470]: time="2025-05-13T00:03:45.238277550Z" level=info msg="Start event monitor" May 13 00:03:45.238523 containerd[1470]: time="2025-05-13T00:03:45.238507234Z" level=info msg="Start snapshots syncer" May 13 00:03:45.238581 containerd[1470]: time="2025-05-13T00:03:45.238568871Z" level=info msg="Start cni network conf syncer for default" May 13 00:03:45.238667 containerd[1470]: time="2025-05-13T00:03:45.238654066Z" level=info msg="Start streaming server" May 13 00:03:45.238892 containerd[1470]: time="2025-05-13T00:03:45.238425972Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:03:45.239232 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:03:45.242897 containerd[1470]: time="2025-05-13T00:03:45.241409895Z" level=info msg="containerd successfully booted in 0.089970s" May 13 00:03:45.303711 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:03:45.325875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:03:45.336524 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:03:45.342393 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:03:45.342599 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:03:45.345117 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:03:45.367466 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:03:45.382592 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:03:45.385040 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:03:45.386322 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:03:46.748182 systemd-networkd[1392]: eth0: Gained IPv6LL May 13 00:03:46.750732 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:03:46.752360 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:03:46.768403 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:03:46.770753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:46.772827 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:03:46.795408 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:03:46.795623 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:03:46.797935 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:03:46.798539 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:03:47.390341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:47.392172 systemd[1]: Reached target multi-user.target - Multi-User System. 
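[editor's note] With containerd now up and serving on /run/containerd/containerd.sock (plus the companion ttrpc socket), anything that speaks its API can inspect the runtime directly. Below is a minimal sketch using the containerd Go client (github.com/containerd/containerd); the k8s.io namespace is an assumption based on this being a CRI-managed host, it needs a user with access to the socket, and at this point in the boot the image list would likely still be empty.

package main

import (
    "context"
    "fmt"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Dial the same socket the log above shows containerd serving on.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatalf("connect containerd: %v", err)
    }
    defer client.Close()

    // CRI-managed pods and images live in the "k8s.io" namespace (assumption).
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    ver, err := client.Version(ctx)
    if err != nil {
        log.Fatalf("version: %v", err)
    }
    fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)

    images, err := client.ListImages(ctx)
    if err != nil {
        log.Fatalf("list images: %v", err)
    }
    for _, img := range images {
        fmt.Println(img.Name())
    }
}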
May 13 00:03:47.395800 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:03:47.398424 systemd[1]: Startup finished in 598ms (kernel) + 4.518s (initrd) + 4.208s (userspace) = 9.325s. May 13 00:03:47.895630 kubelet[1549]: E0513 00:03:47.895558 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:03:47.898108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:03:47.898311 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:03:51.546562 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:03:51.547742 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:53058.service - OpenSSH per-connection server daemon (10.0.0.1:53058). May 13 00:03:51.633173 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 53058 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:51.635173 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:51.654163 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:03:51.666385 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:03:51.669545 systemd-logind[1450]: New session 1 of user core. May 13 00:03:51.679411 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:03:51.693461 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:03:51.696140 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:03:51.772022 systemd[1566]: Queued start job for default target default.target. May 13 00:03:51.781099 systemd[1566]: Created slice app.slice - User Application Slice. May 13 00:03:51.781148 systemd[1566]: Reached target paths.target - Paths. May 13 00:03:51.781160 systemd[1566]: Reached target timers.target - Timers. May 13 00:03:51.782349 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:03:51.793150 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:03:51.793214 systemd[1566]: Reached target sockets.target - Sockets. May 13 00:03:51.793227 systemd[1566]: Reached target basic.target - Basic System. May 13 00:03:51.793263 systemd[1566]: Reached target default.target - Main User Target. May 13 00:03:51.793290 systemd[1566]: Startup finished in 91ms. May 13 00:03:51.793657 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:03:51.794952 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:03:51.860662 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062). May 13 00:03:51.920232 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:51.921579 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:51.926387 systemd-logind[1450]: New session 2 of user core. May 13 00:03:51.939310 systemd[1]: Started session-2.scope - Session 2 of User core. 
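[editor's note] The kubelet exit above is the normal first-boot state: /var/lib/kubelet/config.yaml is only written later (typically by kubeadm when the node is initialized or joined), and the unit keeps failing until it exists. A small Go sketch of the same existence check, with the path taken from the error message; purely illustrative.

package main

import (
    "errors"
    "fmt"
    "os"
)

func main() {
    // Path quoted in the kubelet failure above.
    const path = "/var/lib/kubelet/config.yaml"

    _, err := os.Stat(path)
    switch {
    case err == nil:
        fmt.Println("kubelet config present, the service can start")
    case errors.Is(err, os.ErrNotExist):
        // The condition behind "no such file or directory" above:
        // whatever provisions the node has not written the file yet.
        fmt.Printf("%s missing, kubelet will keep exiting until it is written\n", path)
        os.Exit(1)
    default:
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}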
May 13 00:03:52.000429 sshd[1579]: Connection closed by 10.0.0.1 port 53062 May 13 00:03:52.000779 sshd-session[1577]: pam_unix(sshd:session): session closed for user core May 13 00:03:52.024701 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:53062.service: Deactivated successfully. May 13 00:03:52.026191 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:03:52.032347 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. May 13 00:03:52.032862 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:53070.service - OpenSSH per-connection server daemon (10.0.0.1:53070). May 13 00:03:52.034712 systemd-logind[1450]: Removed session 2. May 13 00:03:52.082609 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53070 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:52.083973 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:52.089053 systemd-logind[1450]: New session 3 of user core. May 13 00:03:52.100355 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:03:52.150710 sshd[1586]: Connection closed by 10.0.0.1 port 53070 May 13 00:03:52.151196 sshd-session[1584]: pam_unix(sshd:session): session closed for user core May 13 00:03:52.165097 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:53070.service: Deactivated successfully. May 13 00:03:52.166784 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:03:52.168369 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. May 13 00:03:52.181529 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:53074.service - OpenSSH per-connection server daemon (10.0.0.1:53074). May 13 00:03:52.186488 systemd-logind[1450]: Removed session 3. May 13 00:03:52.235678 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53074 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:52.237197 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:52.241195 systemd-logind[1450]: New session 4 of user core. May 13 00:03:52.253355 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:03:52.308150 sshd[1593]: Connection closed by 10.0.0.1 port 53074 May 13 00:03:52.308505 sshd-session[1591]: pam_unix(sshd:session): session closed for user core May 13 00:03:52.321401 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:53074.service: Deactivated successfully. May 13 00:03:52.322751 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:03:52.325240 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. May 13 00:03:52.334457 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). May 13 00:03:52.338920 systemd-logind[1450]: Removed session 4. May 13 00:03:52.397931 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:52.399338 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:52.403274 systemd-logind[1450]: New session 5 of user core. May 13 00:03:52.412293 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 00:03:52.482693 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:03:52.482970 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:52.497469 sudo[1601]: pam_unix(sudo:session): session closed for user root May 13 00:03:52.499220 sshd[1600]: Connection closed by 10.0.0.1 port 53120 May 13 00:03:52.500381 sshd-session[1598]: pam_unix(sshd:session): session closed for user core May 13 00:03:52.514106 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:53120.service: Deactivated successfully. May 13 00:03:52.515619 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:03:52.516974 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. May 13 00:03:52.519281 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:53122.service - OpenSSH per-connection server daemon (10.0.0.1:53122). May 13 00:03:52.520298 systemd-logind[1450]: Removed session 5. May 13 00:03:52.566279 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 53122 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:52.567571 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:52.572623 systemd-logind[1450]: New session 6 of user core. May 13 00:03:52.583342 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:03:52.636882 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:03:52.637196 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:52.640790 sudo[1610]: pam_unix(sudo:session): session closed for user root May 13 00:03:52.645694 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 00:03:52.645972 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:52.666443 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:52.700680 augenrules[1632]: No rules May 13 00:03:52.702160 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:52.702376 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:52.703787 sudo[1609]: pam_unix(sudo:session): session closed for user root May 13 00:03:52.706205 sshd[1608]: Connection closed by 10.0.0.1 port 53122 May 13 00:03:52.706046 sshd-session[1606]: pam_unix(sshd:session): session closed for user core May 13 00:03:52.715770 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:53122.service: Deactivated successfully. May 13 00:03:52.717485 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:03:52.720155 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. May 13 00:03:52.728413 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:53132.service - OpenSSH per-connection server daemon (10.0.0.1:53132). May 13 00:03:52.729784 systemd-logind[1450]: Removed session 6. May 13 00:03:52.772656 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 53132 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:52.774206 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:52.778680 systemd-logind[1450]: New session 7 of user core. May 13 00:03:52.796342 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 00:03:52.852649 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:03:52.852933 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:52.873435 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:03:52.890455 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:03:52.890671 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:03:53.410368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:53.420465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:53.444886 systemd[1]: Reloading requested from client PID 1684 ('systemctl') (unit session-7.scope)... May 13 00:03:53.445023 systemd[1]: Reloading... May 13 00:03:53.529249 zram_generator::config[1722]: No configuration found. May 13 00:03:53.718907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:03:53.773466 systemd[1]: Reloading finished in 327 ms. May 13 00:03:53.815310 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:53.818591 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:03:53.818811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:53.820545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:53.938397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:53.942590 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:03:53.980826 kubelet[1770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:03:53.980826 kubelet[1770]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:03:53.980826 kubelet[1770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
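[editor's note] The deprecation warnings above all point at the same remedy: move the flags into the file passed via --config. A sketch of reading that file back with the published kubelet config types, assuming the k8s.io/kubelet and sigs.k8s.io/yaml modules; the two printed fields are the config-file counterparts of --container-runtime-endpoint and --volume-plugin-dir (the third flag, --pod-infra-container-image, has no field and is simply being removed, as the warning says).

package main

import (
    "fmt"
    "log"
    "os"

    kubeletconfig "k8s.io/kubelet/config/v1beta1"
    "sigs.k8s.io/yaml"
)

func main() {
    raw, err := os.ReadFile("/var/lib/kubelet/config.yaml")
    if err != nil {
        log.Fatal(err)
    }

    var cfg kubeletconfig.KubeletConfiguration
    if err := yaml.Unmarshal(raw, &cfg); err != nil {
        log.Fatal(err)
    }

    // Config-file equivalents of two of the deprecated flags warned about above.
    fmt.Println("containerRuntimeEndpoint:", cfg.ContainerRuntimeEndpoint)
    fmt.Println("volumePluginDir:", cfg.VolumePluginDir)
}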
May 13 00:03:53.980826 kubelet[1770]: I0513 00:03:53.980546 1770 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:03:55.030264 kubelet[1770]: I0513 00:03:55.030224 1770 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:03:55.030264 kubelet[1770]: I0513 00:03:55.030253 1770 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:03:55.030657 kubelet[1770]: I0513 00:03:55.030519 1770 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:03:55.087537 kubelet[1770]: I0513 00:03:55.084079 1770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:03:55.096949 kubelet[1770]: E0513 00:03:55.096657 1770 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:03:55.096949 kubelet[1770]: I0513 00:03:55.096692 1770 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:03:55.099614 kubelet[1770]: I0513 00:03:55.099310 1770 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:03:55.100053 kubelet[1770]: I0513 00:03:55.100003 1770 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:03:55.100314 kubelet[1770]: I0513 00:03:55.100138 1770 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.133","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:03:55.100530 kubelet[1770]: I0513 00:03:55.100515 1770 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:03:55.100587 kubelet[1770]: I0513 00:03:55.100579 1770 container_manager_linux.go:304] "Creating device plugin manager" May 
13 00:03:55.100829 kubelet[1770]: I0513 00:03:55.100814 1770 state_mem.go:36] "Initialized new in-memory state store" May 13 00:03:55.104618 kubelet[1770]: I0513 00:03:55.104594 1770 kubelet.go:446] "Attempting to sync node with API server" May 13 00:03:55.105370 kubelet[1770]: I0513 00:03:55.105147 1770 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:03:55.105370 kubelet[1770]: I0513 00:03:55.105187 1770 kubelet.go:352] "Adding apiserver pod source" May 13 00:03:55.105370 kubelet[1770]: E0513 00:03:55.105286 1770 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:55.105370 kubelet[1770]: I0513 00:03:55.105317 1770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:03:55.105370 kubelet[1770]: E0513 00:03:55.105338 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:55.113169 kubelet[1770]: I0513 00:03:55.113139 1770 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 00:03:55.113776 kubelet[1770]: I0513 00:03:55.113763 1770 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:03:55.113897 kubelet[1770]: W0513 00:03:55.113881 1770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:03:55.114915 kubelet[1770]: I0513 00:03:55.114682 1770 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:03:55.114915 kubelet[1770]: I0513 00:03:55.114717 1770 server.go:1287] "Started kubelet" May 13 00:03:55.116106 kubelet[1770]: I0513 00:03:55.115968 1770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:03:55.116934 kubelet[1770]: I0513 00:03:55.116916 1770 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:03:55.117368 kubelet[1770]: I0513 00:03:55.117164 1770 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:03:55.120119 kubelet[1770]: I0513 00:03:55.120004 1770 server.go:490] "Adding debug handlers to kubelet server" May 13 00:03:55.121346 kubelet[1770]: E0513 00:03:55.121263 1770 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:03:55.122933 kubelet[1770]: I0513 00:03:55.122706 1770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:03:55.122933 kubelet[1770]: I0513 00:03:55.122732 1770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:03:55.122933 kubelet[1770]: I0513 00:03:55.122830 1770 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:03:55.123448 kubelet[1770]: I0513 00:03:55.123424 1770 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:03:55.123886 kubelet[1770]: I0513 00:03:55.123610 1770 reconciler.go:26] "Reconciler: start to sync state" May 13 00:03:55.129415 kubelet[1770]: E0513 00:03:55.128138 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.129415 kubelet[1770]: W0513 00:03:55.128701 1770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:03:55.129415 kubelet[1770]: E0513 00:03:55.128760 1770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 13 00:03:55.129415 kubelet[1770]: W0513 00:03:55.128853 1770 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.133" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:03:55.129415 kubelet[1770]: E0513 00:03:55.128868 1770 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.133\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 13 00:03:55.129601 kubelet[1770]: E0513 00:03:55.128653 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.133.183eed506b008b51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.133,UID:10.0.0.133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.133,},FirstTimestamp:2025-05-13 00:03:55.114695505 +0000 UTC m=+1.169150898,LastTimestamp:2025-05-13 00:03:55.114695505 +0000 UTC m=+1.169150898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.133,}" May 13 00:03:55.131266 kubelet[1770]: I0513 00:03:55.130494 1770 factory.go:221] Registration of the systemd container factory successfully May 13 00:03:55.131266 kubelet[1770]: I0513 00:03:55.130724 1770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:03:55.133385 
kubelet[1770]: I0513 00:03:55.133361 1770 factory.go:221] Registration of the containerd container factory successfully May 13 00:03:55.136832 kubelet[1770]: E0513 00:03:55.136033 1770 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.133.183eed506b649c7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.133,UID:10.0.0.133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.133,},FirstTimestamp:2025-05-13 00:03:55.121253502 +0000 UTC m=+1.175708935,LastTimestamp:2025-05-13 00:03:55.121253502 +0000 UTC m=+1.175708935,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.133,}" May 13 00:03:55.140838 kubelet[1770]: I0513 00:03:55.140818 1770 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:03:55.140838 kubelet[1770]: I0513 00:03:55.140832 1770 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:03:55.140989 kubelet[1770]: I0513 00:03:55.140850 1770 state_mem.go:36] "Initialized new in-memory state store" May 13 00:03:55.150012 kubelet[1770]: E0513 00:03:55.149964 1770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.133\" not found" node="10.0.0.133" May 13 00:03:55.214694 kubelet[1770]: I0513 00:03:55.214653 1770 policy_none.go:49] "None policy: Start" May 13 00:03:55.214694 kubelet[1770]: I0513 00:03:55.214686 1770 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:03:55.214694 kubelet[1770]: I0513 00:03:55.214698 1770 state_mem.go:35] "Initializing new in-memory state store" May 13 00:03:55.224899 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:03:55.228882 kubelet[1770]: E0513 00:03:55.228826 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.239319 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:03:55.242697 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:03:55.249823 kubelet[1770]: I0513 00:03:55.249780 1770 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:03:55.250004 kubelet[1770]: I0513 00:03:55.249978 1770 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:03:55.250041 kubelet[1770]: I0513 00:03:55.249996 1770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:03:55.250348 kubelet[1770]: I0513 00:03:55.250274 1770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:03:55.251708 kubelet[1770]: I0513 00:03:55.251524 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:03:55.252309 kubelet[1770]: E0513 00:03:55.252277 1770 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:03:55.252365 kubelet[1770]: E0513 00:03:55.252325 1770 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.133\" not found" May 13 00:03:55.252498 kubelet[1770]: I0513 00:03:55.252474 1770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:03:55.252498 kubelet[1770]: I0513 00:03:55.252499 1770 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:03:55.252556 kubelet[1770]: I0513 00:03:55.252517 1770 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 00:03:55.252556 kubelet[1770]: I0513 00:03:55.252524 1770 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:03:55.252737 kubelet[1770]: E0513 00:03:55.252617 1770 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 00:03:55.350959 kubelet[1770]: I0513 00:03:55.350853 1770 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.133" May 13 00:03:55.359074 kubelet[1770]: I0513 00:03:55.359047 1770 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.133" May 13 00:03:55.359074 kubelet[1770]: E0513 00:03:55.359078 1770 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.133\": node \"10.0.0.133\" not found" May 13 00:03:55.363633 kubelet[1770]: E0513 00:03:55.363584 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.465245 kubelet[1770]: E0513 00:03:55.465205 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.477327 sudo[1643]: pam_unix(sudo:session): session closed for user root May 13 00:03:55.478977 sshd[1642]: Connection closed by 10.0.0.1 port 53132 May 13 00:03:55.478889 sshd-session[1640]: pam_unix(sshd:session): session closed for user core May 13 00:03:55.482454 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:53132.service: Deactivated successfully. May 13 00:03:55.484051 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:03:55.484890 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. May 13 00:03:55.485986 systemd-logind[1450]: Removed session 7. 
May 13 00:03:55.566106 kubelet[1770]: E0513 00:03:55.566034 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.666960 kubelet[1770]: E0513 00:03:55.666845 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.767025 kubelet[1770]: E0513 00:03:55.766976 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.867437 kubelet[1770]: E0513 00:03:55.867403 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:55.967680 kubelet[1770]: E0513 00:03:55.967582 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.033113 kubelet[1770]: I0513 00:03:56.032919 1770 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 00:03:56.033113 kubelet[1770]: W0513 00:03:56.033120 1770 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:03:56.033468 kubelet[1770]: W0513 00:03:56.033154 1770 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:03:56.033468 kubelet[1770]: W0513 00:03:56.033175 1770 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:03:56.068396 kubelet[1770]: E0513 00:03:56.068346 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.106181 kubelet[1770]: E0513 00:03:56.106139 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:56.168508 kubelet[1770]: E0513 00:03:56.168467 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.269303 kubelet[1770]: E0513 00:03:56.269184 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.369507 kubelet[1770]: E0513 00:03:56.369465 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.469934 kubelet[1770]: E0513 00:03:56.469888 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.570640 kubelet[1770]: E0513 00:03:56.570497 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.670633 kubelet[1770]: E0513 00:03:56.670572 1770 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" May 13 00:03:56.771706 kubelet[1770]: I0513 00:03:56.771659 1770 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.2.0/24" May 13 00:03:56.772065 
containerd[1470]: time="2025-05-13T00:03:56.771998428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:03:56.772575 kubelet[1770]: I0513 00:03:56.772501 1770 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.2.0/24" May 13 00:03:57.106759 kubelet[1770]: E0513 00:03:57.106704 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:57.107125 kubelet[1770]: I0513 00:03:57.106757 1770 apiserver.go:52] "Watching apiserver" May 13 00:03:57.121517 systemd[1]: Created slice kubepods-burstable-pod72015ef3_d003_4160_9c2b_66e1890cf82c.slice - libcontainer container kubepods-burstable-pod72015ef3_d003_4160_9c2b_66e1890cf82c.slice. May 13 00:03:57.124475 kubelet[1770]: I0513 00:03:57.124430 1770 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:03:57.134002 kubelet[1770]: I0513 00:03:57.133962 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41716785-0eec-4726-bd60-0695fc84e027-kube-proxy\") pod \"kube-proxy-h9vpz\" (UID: \"41716785-0eec-4726-bd60-0695fc84e027\") " pod="kube-system/kube-proxy-h9vpz" May 13 00:03:57.134085 kubelet[1770]: I0513 00:03:57.134007 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-run\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134085 kubelet[1770]: I0513 00:03:57.134028 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cni-path\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134085 kubelet[1770]: I0513 00:03:57.134046 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-hubble-tls\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134178 kubelet[1770]: I0513 00:03:57.134091 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41716785-0eec-4726-bd60-0695fc84e027-xtables-lock\") pod \"kube-proxy-h9vpz\" (UID: \"41716785-0eec-4726-bd60-0695fc84e027\") " pod="kube-system/kube-proxy-h9vpz" May 13 00:03:57.134178 kubelet[1770]: I0513 00:03:57.134143 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-config-path\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134178 kubelet[1770]: I0513 00:03:57.134163 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-net\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " 
pod="kube-system/cilium-q2cl9" May 13 00:03:57.134241 kubelet[1770]: I0513 00:03:57.134181 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-etc-cni-netd\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134241 kubelet[1770]: I0513 00:03:57.134196 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-lib-modules\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134241 kubelet[1770]: I0513 00:03:57.134214 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqkm\" (UniqueName: \"kubernetes.io/projected/41716785-0eec-4726-bd60-0695fc84e027-kube-api-access-bjqkm\") pod \"kube-proxy-h9vpz\" (UID: \"41716785-0eec-4726-bd60-0695fc84e027\") " pod="kube-system/kube-proxy-h9vpz" May 13 00:03:57.134241 kubelet[1770]: I0513 00:03:57.134231 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-bpf-maps\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134321 kubelet[1770]: I0513 00:03:57.134247 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-hostproc\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134321 kubelet[1770]: I0513 00:03:57.134264 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72015ef3-d003-4160-9c2b-66e1890cf82c-clustermesh-secrets\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134321 kubelet[1770]: I0513 00:03:57.134279 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-kernel\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134321 kubelet[1770]: I0513 00:03:57.134294 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tslvp\" (UniqueName: \"kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-kube-api-access-tslvp\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134321 kubelet[1770]: I0513 00:03:57.134307 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41716785-0eec-4726-bd60-0695fc84e027-lib-modules\") pod \"kube-proxy-h9vpz\" (UID: \"41716785-0eec-4726-bd60-0695fc84e027\") " pod="kube-system/kube-proxy-h9vpz" May 13 00:03:57.134419 kubelet[1770]: I0513 00:03:57.134321 1770 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-cgroup\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.134419 kubelet[1770]: I0513 00:03:57.134336 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-xtables-lock\") pod \"cilium-q2cl9\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " pod="kube-system/cilium-q2cl9" May 13 00:03:57.143490 systemd[1]: Created slice kubepods-besteffort-pod41716785_0eec_4726_bd60_0695fc84e027.slice - libcontainer container kubepods-besteffort-pod41716785_0eec_4726_bd60_0695fc84e027.slice. May 13 00:03:57.441372 kubelet[1770]: E0513 00:03:57.441255 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:57.442215 containerd[1470]: time="2025-05-13T00:03:57.442174369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2cl9,Uid:72015ef3-d003-4160-9c2b-66e1890cf82c,Namespace:kube-system,Attempt:0,}" May 13 00:03:57.458078 kubelet[1770]: E0513 00:03:57.457815 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:57.458414 containerd[1470]: time="2025-05-13T00:03:57.458359461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9vpz,Uid:41716785-0eec-4726-bd60-0695fc84e027,Namespace:kube-system,Attempt:0,}" May 13 00:03:58.027353 containerd[1470]: time="2025-05-13T00:03:58.027296357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:58.029108 containerd[1470]: time="2025-05-13T00:03:58.029069288Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:58.029823 containerd[1470]: time="2025-05-13T00:03:58.029774089Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:03:58.030913 containerd[1470]: time="2025-05-13T00:03:58.030873149Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:58.031546 containerd[1470]: time="2025-05-13T00:03:58.031509639Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:03:58.034818 containerd[1470]: time="2025-05-13T00:03:58.034779477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:58.035759 containerd[1470]: time="2025-05-13T00:03:58.035722801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 593.447029ms" May 13 00:03:58.037863 containerd[1470]: time="2025-05-13T00:03:58.037823171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.373006ms" May 13 00:03:58.106929 kubelet[1770]: E0513 00:03:58.106840 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.166478117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.166594939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.166611755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.167102793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.167159772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.167175620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.167288006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:58.168502 containerd[1470]: time="2025-05-13T00:03:58.168423075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:58.245333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673434340.mount: Deactivated successfully. May 13 00:03:58.283295 systemd[1]: Started cri-containerd-0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2.scope - libcontainer container 0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2. May 13 00:03:58.284982 systemd[1]: Started cri-containerd-a5edca94ea6eb17ffa001ab6d5e571716b8508dfe56fb278e862e9fc6f50baa6.scope - libcontainer container a5edca94ea6eb17ffa001ab6d5e571716b8508dfe56fb278e862e9fc6f50baa6. 
May 13 00:03:58.307772 containerd[1470]: time="2025-05-13T00:03:58.307722222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2cl9,Uid:72015ef3-d003-4160-9c2b-66e1890cf82c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\"" May 13 00:03:58.309078 kubelet[1770]: E0513 00:03:58.309036 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:58.311470 containerd[1470]: time="2025-05-13T00:03:58.311438861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9vpz,Uid:41716785-0eec-4726-bd60-0695fc84e027,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5edca94ea6eb17ffa001ab6d5e571716b8508dfe56fb278e862e9fc6f50baa6\"" May 13 00:03:58.311566 containerd[1470]: time="2025-05-13T00:03:58.311528222Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:03:58.311971 kubelet[1770]: E0513 00:03:58.311951 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:59.107363 kubelet[1770]: E0513 00:03:59.107300 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:00.108502 kubelet[1770]: E0513 00:04:00.108436 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:01.108958 kubelet[1770]: E0513 00:04:01.108920 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:01.535493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754077625.mount: Deactivated successfully. 
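[editor's note] Both RunPodSandbox calls return sandbox ids, so the CRI API now knows about the cilium-q2cl9 and kube-proxy-h9vpz pods. A sketch that lists them over the same endpoint the CRI plugin config advertises (/run/containerd/containerd.sock), using the CRI gRPC API from k8s.io/cri-api; the insecure unix-socket dial options are assumptions, not something shown in the log.

package main

import (
    "context"
    "fmt"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Same ContainerdEndpoint shown in the CRI plugin config dump above.
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    rt := runtimeapi.NewRuntimeServiceClient(conn)
    resp, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
    if err != nil {
        log.Fatal(err)
    }
    for _, sb := range resp.Items {
        fmt.Printf("%s/%s sandbox=%s state=%s\n",
            sb.Metadata.Namespace, sb.Metadata.Name, sb.Id, sb.State)
    }
}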
May 13 00:04:02.109487 kubelet[1770]: E0513 00:04:02.109451 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:02.744185 containerd[1470]: time="2025-05-13T00:04:02.744135831Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:02.745129 containerd[1470]: time="2025-05-13T00:04:02.744922641Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 00:04:02.745829 containerd[1470]: time="2025-05-13T00:04:02.745784607Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:02.748154 containerd[1470]: time="2025-05-13T00:04:02.748127193Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.436551139s" May 13 00:04:02.748201 containerd[1470]: time="2025-05-13T00:04:02.748160391Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 00:04:02.749273 containerd[1470]: time="2025-05-13T00:04:02.749201607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 00:04:02.750467 containerd[1470]: time="2025-05-13T00:04:02.750436100Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:04:02.764742 containerd[1470]: time="2025-05-13T00:04:02.764635377Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\"" May 13 00:04:02.765275 containerd[1470]: time="2025-05-13T00:04:02.765233373Z" level=info msg="StartContainer for \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\"" May 13 00:04:02.795264 systemd[1]: Started cri-containerd-0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1.scope - libcontainer container 0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1. May 13 00:04:02.820250 containerd[1470]: time="2025-05-13T00:04:02.820207521Z" level=info msg="StartContainer for \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\" returns successfully" May 13 00:04:02.862829 systemd[1]: cri-containerd-0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1.scope: Deactivated successfully. 
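[editor's note] The cilium image is pulled by digest, which is likely why the "repo tag" comes back empty while the "repo digest" is populated above. A sketch of the equivalent pull through the containerd Go client, reusing the connection pattern from the earlier example; containerd.WithPullUnpack requests the unpack step that has to finish before CreateContainer can use the image.

package main

import (
    "context"
    "fmt"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Same digest-pinned ref the kubelet asked containerd to pull above.
    ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
    img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    size, err := img.Size(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("pulled %s (%d bytes, digest %s)\n", img.Name(), size, img.Target().Digest)
}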
May 13 00:04:02.980521 containerd[1470]: time="2025-05-13T00:04:02.980425181Z" level=info msg="shim disconnected" id=0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1 namespace=k8s.io May 13 00:04:02.980521 containerd[1470]: time="2025-05-13T00:04:02.980504236Z" level=warning msg="cleaning up after shim disconnected" id=0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1 namespace=k8s.io May 13 00:04:02.980521 containerd[1470]: time="2025-05-13T00:04:02.980514846Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:03.110384 kubelet[1770]: E0513 00:04:03.110266 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:03.270499 kubelet[1770]: E0513 00:04:03.270473 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:03.272164 containerd[1470]: time="2025-05-13T00:04:03.272009445Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:04:03.282671 containerd[1470]: time="2025-05-13T00:04:03.282612536Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\"" May 13 00:04:03.283430 containerd[1470]: time="2025-05-13T00:04:03.283393619Z" level=info msg="StartContainer for \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\"" May 13 00:04:03.314285 systemd[1]: Started cri-containerd-4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292.scope - libcontainer container 4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292. May 13 00:04:03.336117 containerd[1470]: time="2025-05-13T00:04:03.335910484Z" level=info msg="StartContainer for \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\" returns successfully" May 13 00:04:03.349164 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:04:03.349374 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:04:03.349449 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 00:04:03.355467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:04:03.355682 systemd[1]: cri-containerd-4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292.scope: Deactivated successfully. May 13 00:04:03.367452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:04:03.396883 containerd[1470]: time="2025-05-13T00:04:03.396814081Z" level=info msg="shim disconnected" id=4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292 namespace=k8s.io May 13 00:04:03.396883 containerd[1470]: time="2025-05-13T00:04:03.396872845Z" level=warning msg="cleaning up after shim disconnected" id=4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292 namespace=k8s.io May 13 00:04:03.397117 containerd[1470]: time="2025-05-13T00:04:03.396897547Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:03.762381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1-rootfs.mount: Deactivated successfully. 
May 13 00:04:03.969671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185154735.mount: Deactivated successfully. May 13 00:04:04.110524 kubelet[1770]: E0513 00:04:04.110416 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:04.188139 containerd[1470]: time="2025-05-13T00:04:04.188067334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:04.188862 containerd[1470]: time="2025-05-13T00:04:04.188809551Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 00:04:04.189683 containerd[1470]: time="2025-05-13T00:04:04.189643502Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:04.192322 containerd[1470]: time="2025-05-13T00:04:04.192279721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:04.193012 containerd[1470]: time="2025-05-13T00:04:04.192973884Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.443738567s" May 13 00:04:04.193012 containerd[1470]: time="2025-05-13T00:04:04.193007205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 00:04:04.195376 containerd[1470]: time="2025-05-13T00:04:04.195336991Z" level=info msg="CreateContainer within sandbox \"a5edca94ea6eb17ffa001ab6d5e571716b8508dfe56fb278e862e9fc6f50baa6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:04:04.213321 containerd[1470]: time="2025-05-13T00:04:04.213196047Z" level=info msg="CreateContainer within sandbox \"a5edca94ea6eb17ffa001ab6d5e571716b8508dfe56fb278e862e9fc6f50baa6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41b6d98385b125500c2beb2b9b5f46604d0464d8a6f376ce96b19b85be616a57\"" May 13 00:04:04.213910 containerd[1470]: time="2025-05-13T00:04:04.213882020Z" level=info msg="StartContainer for \"41b6d98385b125500c2beb2b9b5f46604d0464d8a6f376ce96b19b85be616a57\"" May 13 00:04:04.245300 systemd[1]: Started cri-containerd-41b6d98385b125500c2beb2b9b5f46604d0464d8a6f376ce96b19b85be616a57.scope - libcontainer container 41b6d98385b125500c2beb2b9b5f46604d0464d8a6f376ce96b19b85be616a57. 
May 13 00:04:04.272346 containerd[1470]: time="2025-05-13T00:04:04.271776116Z" level=info msg="StartContainer for \"41b6d98385b125500c2beb2b9b5f46604d0464d8a6f376ce96b19b85be616a57\" returns successfully" May 13 00:04:04.277462 kubelet[1770]: E0513 00:04:04.276930 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:04.279634 kubelet[1770]: E0513 00:04:04.279602 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:04.291748 containerd[1470]: time="2025-05-13T00:04:04.291690521Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:04:04.308343 containerd[1470]: time="2025-05-13T00:04:04.306592592Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\"" May 13 00:04:04.308480 kubelet[1770]: I0513 00:04:04.307750 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9vpz" podStartSLOduration=3.42626909 podStartE2EDuration="9.307730246s" podCreationTimestamp="2025-05-13 00:03:55 +0000 UTC" firstStartedPulling="2025-05-13 00:03:58.312585867 +0000 UTC m=+4.367041219" lastFinishedPulling="2025-05-13 00:04:04.194046983 +0000 UTC m=+10.248502375" observedRunningTime="2025-05-13 00:04:04.286884015 +0000 UTC m=+10.341339407" watchObservedRunningTime="2025-05-13 00:04:04.307730246 +0000 UTC m=+10.362185639" May 13 00:04:04.308565 containerd[1470]: time="2025-05-13T00:04:04.308489927Z" level=info msg="StartContainer for \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\"" May 13 00:04:04.339320 systemd[1]: Started cri-containerd-37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7.scope - libcontainer container 37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7. May 13 00:04:04.367852 containerd[1470]: time="2025-05-13T00:04:04.367185496Z" level=info msg="StartContainer for \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\" returns successfully" May 13 00:04:04.402208 systemd[1]: cri-containerd-37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7.scope: Deactivated successfully. 
May 13 00:04:04.554349 containerd[1470]: time="2025-05-13T00:04:04.554291603Z" level=info msg="shim disconnected" id=37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7 namespace=k8s.io May 13 00:04:04.554349 containerd[1470]: time="2025-05-13T00:04:04.554346804Z" level=warning msg="cleaning up after shim disconnected" id=37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7 namespace=k8s.io May 13 00:04:04.554349 containerd[1470]: time="2025-05-13T00:04:04.554357041Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:05.110561 kubelet[1770]: E0513 00:04:05.110511 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:05.283388 kubelet[1770]: E0513 00:04:05.283356 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:05.283522 kubelet[1770]: E0513 00:04:05.283410 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:05.285389 containerd[1470]: time="2025-05-13T00:04:05.285267237Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:04:05.299726 containerd[1470]: time="2025-05-13T00:04:05.299620576Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\"" May 13 00:04:05.300274 containerd[1470]: time="2025-05-13T00:04:05.300246727Z" level=info msg="StartContainer for \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\"" May 13 00:04:05.333304 systemd[1]: Started cri-containerd-1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507.scope - libcontainer container 1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507. May 13 00:04:05.354724 systemd[1]: cri-containerd-1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507.scope: Deactivated successfully. May 13 00:04:05.356926 containerd[1470]: time="2025-05-13T00:04:05.356862423Z" level=info msg="StartContainer for \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\" returns successfully" May 13 00:04:05.372201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507-rootfs.mount: Deactivated successfully. 
May 13 00:04:05.377105 containerd[1470]: time="2025-05-13T00:04:05.377031261Z" level=info msg="shim disconnected" id=1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507 namespace=k8s.io May 13 00:04:05.377195 containerd[1470]: time="2025-05-13T00:04:05.377108868Z" level=warning msg="cleaning up after shim disconnected" id=1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507 namespace=k8s.io May 13 00:04:05.377195 containerd[1470]: time="2025-05-13T00:04:05.377120505Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:06.111115 kubelet[1770]: E0513 00:04:06.111070 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:06.286781 kubelet[1770]: E0513 00:04:06.286736 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:06.289007 containerd[1470]: time="2025-05-13T00:04:06.288841702Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:04:06.304814 containerd[1470]: time="2025-05-13T00:04:06.304755534Z" level=info msg="CreateContainer within sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\"" May 13 00:04:06.305447 containerd[1470]: time="2025-05-13T00:04:06.305417216Z" level=info msg="StartContainer for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\"" May 13 00:04:06.336317 systemd[1]: Started cri-containerd-8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d.scope - libcontainer container 8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d. 
May 13 00:04:06.363441 containerd[1470]: time="2025-05-13T00:04:06.363280898Z" level=info msg="StartContainer for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" returns successfully" May 13 00:04:06.459350 kubelet[1770]: I0513 00:04:06.459314 1770 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 00:04:06.894127 kernel: Initializing XFRM netlink socket May 13 00:04:07.111457 kubelet[1770]: E0513 00:04:07.111401 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:07.291014 kubelet[1770]: E0513 00:04:07.290872 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:07.306969 kubelet[1770]: I0513 00:04:07.306906 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2cl9" podStartSLOduration=7.868742081 podStartE2EDuration="12.306888928s" podCreationTimestamp="2025-05-13 00:03:55 +0000 UTC" firstStartedPulling="2025-05-13 00:03:58.310884513 +0000 UTC m=+4.365339905" lastFinishedPulling="2025-05-13 00:04:02.7490314 +0000 UTC m=+8.803486752" observedRunningTime="2025-05-13 00:04:07.306604234 +0000 UTC m=+13.361059626" watchObservedRunningTime="2025-05-13 00:04:07.306888928 +0000 UTC m=+13.361344320" May 13 00:04:08.111922 kubelet[1770]: E0513 00:04:08.111861 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:08.292545 kubelet[1770]: E0513 00:04:08.292510 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:08.515915 systemd-networkd[1392]: cilium_host: Link UP May 13 00:04:08.517266 systemd-networkd[1392]: cilium_net: Link UP May 13 00:04:08.517536 systemd-networkd[1392]: cilium_net: Gained carrier May 13 00:04:08.517670 systemd-networkd[1392]: cilium_host: Gained carrier May 13 00:04:08.517770 systemd-networkd[1392]: cilium_net: Gained IPv6LL May 13 00:04:08.517893 systemd-networkd[1392]: cilium_host: Gained IPv6LL May 13 00:04:08.594235 systemd-networkd[1392]: cilium_vxlan: Link UP May 13 00:04:08.594240 systemd-networkd[1392]: cilium_vxlan: Gained carrier May 13 00:04:08.880130 kernel: NET: Registered PF_ALG protocol family May 13 00:04:09.112903 kubelet[1770]: E0513 00:04:09.112835 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:09.293863 kubelet[1770]: E0513 00:04:09.293466 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:09.424712 systemd-networkd[1392]: lxc_health: Link UP May 13 00:04:09.442254 systemd-networkd[1392]: lxc_health: Gained carrier May 13 00:04:09.657369 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL May 13 00:04:10.113008 kubelet[1770]: E0513 00:04:10.112962 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:10.294811 kubelet[1770]: E0513 00:04:10.294765 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 
00:04:10.745340 systemd-networkd[1392]: lxc_health: Gained IPv6LL May 13 00:04:11.113644 kubelet[1770]: E0513 00:04:11.113579 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:11.651951 systemd[1]: Created slice kubepods-besteffort-pod72f61db6_b009_4b26_a0f4_40f9e23a9711.slice - libcontainer container kubepods-besteffort-pod72f61db6_b009_4b26_a0f4_40f9e23a9711.slice. May 13 00:04:11.725526 kubelet[1770]: I0513 00:04:11.725450 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdn82\" (UniqueName: \"kubernetes.io/projected/72f61db6-b009-4b26-a0f4-40f9e23a9711-kube-api-access-mdn82\") pod \"nginx-deployment-7fcdb87857-4b999\" (UID: \"72f61db6-b009-4b26-a0f4-40f9e23a9711\") " pod="default/nginx-deployment-7fcdb87857-4b999" May 13 00:04:11.956768 containerd[1470]: time="2025-05-13T00:04:11.956604263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4b999,Uid:72f61db6-b009-4b26-a0f4-40f9e23a9711,Namespace:default,Attempt:0,}" May 13 00:04:12.048673 systemd-networkd[1392]: lxcacd309163cb8: Link UP May 13 00:04:12.051137 kernel: eth0: renamed from tmp21623 May 13 00:04:12.056685 systemd-networkd[1392]: lxcacd309163cb8: Gained carrier May 13 00:04:12.113948 kubelet[1770]: E0513 00:04:12.113883 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:13.114783 kubelet[1770]: E0513 00:04:13.114735 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:13.817255 systemd-networkd[1392]: lxcacd309163cb8: Gained IPv6LL May 13 00:04:13.934649 containerd[1470]: time="2025-05-13T00:04:13.934548239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:04:13.934649 containerd[1470]: time="2025-05-13T00:04:13.934605662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:04:13.934649 containerd[1470]: time="2025-05-13T00:04:13.934620999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:13.935070 containerd[1470]: time="2025-05-13T00:04:13.934698764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:13.954284 systemd[1]: Started cri-containerd-21623fba839ead3b495ff3f73e1c81fe77a948ab9b76887f7fc94abdd2d4a8f9.scope - libcontainer container 21623fba839ead3b495ff3f73e1c81fe77a948ab9b76887f7fc94abdd2d4a8f9. 
May 13 00:04:13.963258 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:04:13.979929 containerd[1470]: time="2025-05-13T00:04:13.979888661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4b999,Uid:72f61db6-b009-4b26-a0f4-40f9e23a9711,Namespace:default,Attempt:0,} returns sandbox id \"21623fba839ead3b495ff3f73e1c81fe77a948ab9b76887f7fc94abdd2d4a8f9\"" May 13 00:04:13.981335 containerd[1470]: time="2025-05-13T00:04:13.981298165Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:04:14.116253 kubelet[1770]: E0513 00:04:14.115846 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:15.105451 kubelet[1770]: E0513 00:04:15.105408 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:15.116985 kubelet[1770]: E0513 00:04:15.116951 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:15.636090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534096698.mount: Deactivated successfully. May 13 00:04:16.117504 kubelet[1770]: E0513 00:04:16.117456 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:16.435643 containerd[1470]: time="2025-05-13T00:04:16.435414507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:16.436605 containerd[1470]: time="2025-05-13T00:04:16.436541614Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 13 00:04:16.437417 containerd[1470]: time="2025-05-13T00:04:16.437384352Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:16.440248 containerd[1470]: time="2025-05-13T00:04:16.440191493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:16.442119 containerd[1470]: time="2025-05-13T00:04:16.441330729Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.459988559s" May 13 00:04:16.442119 containerd[1470]: time="2025-05-13T00:04:16.441364834Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:04:16.443540 containerd[1470]: time="2025-05-13T00:04:16.443510569Z" level=info msg="CreateContainer within sandbox \"21623fba839ead3b495ff3f73e1c81fe77a948ab9b76887f7fc94abdd2d4a8f9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 00:04:16.454078 containerd[1470]: time="2025-05-13T00:04:16.454032533Z" level=info msg="CreateContainer within sandbox \"21623fba839ead3b495ff3f73e1c81fe77a948ab9b76887f7fc94abdd2d4a8f9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns 
container id \"b2a057f734cd26d8c5590b4fd24a84bd3d6b440785fb29273ff57bf43373af1b\"" May 13 00:04:16.454678 containerd[1470]: time="2025-05-13T00:04:16.454649746Z" level=info msg="StartContainer for \"b2a057f734cd26d8c5590b4fd24a84bd3d6b440785fb29273ff57bf43373af1b\"" May 13 00:04:16.483265 systemd[1]: Started cri-containerd-b2a057f734cd26d8c5590b4fd24a84bd3d6b440785fb29273ff57bf43373af1b.scope - libcontainer container b2a057f734cd26d8c5590b4fd24a84bd3d6b440785fb29273ff57bf43373af1b. May 13 00:04:16.510071 containerd[1470]: time="2025-05-13T00:04:16.510014666Z" level=info msg="StartContainer for \"b2a057f734cd26d8c5590b4fd24a84bd3d6b440785fb29273ff57bf43373af1b\" returns successfully" May 13 00:04:17.118430 kubelet[1770]: E0513 00:04:17.118375 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:17.316702 kubelet[1770]: I0513 00:04:17.316632 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-4b999" podStartSLOduration=3.854966512 podStartE2EDuration="6.316612918s" podCreationTimestamp="2025-05-13 00:04:11 +0000 UTC" firstStartedPulling="2025-05-13 00:04:13.980698228 +0000 UTC m=+20.035153620" lastFinishedPulling="2025-05-13 00:04:16.442344634 +0000 UTC m=+22.496800026" observedRunningTime="2025-05-13 00:04:17.31610988 +0000 UTC m=+23.370565272" watchObservedRunningTime="2025-05-13 00:04:17.316612918 +0000 UTC m=+23.371068270" May 13 00:04:18.119245 kubelet[1770]: E0513 00:04:18.119179 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:18.644815 kubelet[1770]: I0513 00:04:18.644743 1770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:04:18.645234 kubelet[1770]: E0513 00:04:18.645214 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:19.119828 kubelet[1770]: E0513 00:04:19.119781 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:19.311679 kubelet[1770]: E0513 00:04:19.311642 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:20.120812 kubelet[1770]: E0513 00:04:20.120766 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:21.121572 kubelet[1770]: E0513 00:04:21.121528 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:22.122014 kubelet[1770]: E0513 00:04:22.121964 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:23.122280 kubelet[1770]: E0513 00:04:23.122221 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:23.411825 systemd[1]: Created slice kubepods-besteffort-podc0f9985d_2e70_4729_9b58_ffda58fda85c.slice - libcontainer container kubepods-besteffort-podc0f9985d_2e70_4729_9b58_ffda58fda85c.slice. 
May 13 00:04:23.495124 kubelet[1770]: I0513 00:04:23.494997 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfb54\" (UniqueName: \"kubernetes.io/projected/c0f9985d-2e70-4729-9b58-ffda58fda85c-kube-api-access-mfb54\") pod \"nfs-server-provisioner-0\" (UID: \"c0f9985d-2e70-4729-9b58-ffda58fda85c\") " pod="default/nfs-server-provisioner-0" May 13 00:04:23.495124 kubelet[1770]: I0513 00:04:23.495049 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c0f9985d-2e70-4729-9b58-ffda58fda85c-data\") pod \"nfs-server-provisioner-0\" (UID: \"c0f9985d-2e70-4729-9b58-ffda58fda85c\") " pod="default/nfs-server-provisioner-0" May 13 00:04:23.714400 containerd[1470]: time="2025-05-13T00:04:23.714296101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c0f9985d-2e70-4729-9b58-ffda58fda85c,Namespace:default,Attempt:0,}" May 13 00:04:23.747388 systemd-networkd[1392]: lxcaadd6dbd5dfc: Link UP May 13 00:04:23.760190 kernel: eth0: renamed from tmpd4627 May 13 00:04:23.765959 systemd-networkd[1392]: lxcaadd6dbd5dfc: Gained carrier May 13 00:04:23.947454 containerd[1470]: time="2025-05-13T00:04:23.947367176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:04:23.947454 containerd[1470]: time="2025-05-13T00:04:23.947417490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:04:23.947454 containerd[1470]: time="2025-05-13T00:04:23.947429378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:23.947617 containerd[1470]: time="2025-05-13T00:04:23.947501587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:23.967298 systemd[1]: Started cri-containerd-d4627f366db2d80b96dc578ea2d11db5641966c2587013be0dd2c8b02b60c462.scope - libcontainer container d4627f366db2d80b96dc578ea2d11db5641966c2587013be0dd2c8b02b60c462. 
May 13 00:04:23.979766 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:04:23.995740 containerd[1470]: time="2025-05-13T00:04:23.995705970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c0f9985d-2e70-4729-9b58-ffda58fda85c,Namespace:default,Attempt:0,} returns sandbox id \"d4627f366db2d80b96dc578ea2d11db5641966c2587013be0dd2c8b02b60c462\"" May 13 00:04:23.998617 containerd[1470]: time="2025-05-13T00:04:23.998426378Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 00:04:24.123315 kubelet[1770]: E0513 00:04:24.123271 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:25.123927 kubelet[1770]: E0513 00:04:25.123874 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:25.209265 systemd-networkd[1392]: lxcaadd6dbd5dfc: Gained IPv6LL May 13 00:04:26.124432 kubelet[1770]: E0513 00:04:26.124352 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:27.125164 kubelet[1770]: E0513 00:04:27.125069 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:28.125930 kubelet[1770]: E0513 00:04:28.125850 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:28.400151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654588335.mount: Deactivated successfully. May 13 00:04:29.127711 kubelet[1770]: E0513 00:04:29.127656 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:29.836990 containerd[1470]: time="2025-05-13T00:04:29.836936348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:29.838081 containerd[1470]: time="2025-05-13T00:04:29.837450362Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 13 00:04:29.838607 containerd[1470]: time="2025-05-13T00:04:29.838579121Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:29.842048 containerd[1470]: time="2025-05-13T00:04:29.842010340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:29.843926 containerd[1470]: time="2025-05-13T00:04:29.843883228Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.845064024s" May 13 00:04:29.843926 containerd[1470]: time="2025-05-13T00:04:29.843925088Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 13 00:04:29.845976 containerd[1470]: time="2025-05-13T00:04:29.845917915Z" level=info msg="CreateContainer within sandbox \"d4627f366db2d80b96dc578ea2d11db5641966c2587013be0dd2c8b02b60c462\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 00:04:29.858524 containerd[1470]: time="2025-05-13T00:04:29.858446679Z" level=info msg="CreateContainer within sandbox \"d4627f366db2d80b96dc578ea2d11db5641966c2587013be0dd2c8b02b60c462\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d770732334f7ae92ad52f4007c3dee19f7e213c7a533eb85bb7fc1297044aaa2\"" May 13 00:04:29.859019 containerd[1470]: time="2025-05-13T00:04:29.858959493Z" level=info msg="StartContainer for \"d770732334f7ae92ad52f4007c3dee19f7e213c7a533eb85bb7fc1297044aaa2\"" May 13 00:04:29.947302 systemd[1]: Started cri-containerd-d770732334f7ae92ad52f4007c3dee19f7e213c7a533eb85bb7fc1297044aaa2.scope - libcontainer container d770732334f7ae92ad52f4007c3dee19f7e213c7a533eb85bb7fc1297044aaa2. May 13 00:04:30.025187 containerd[1470]: time="2025-05-13T00:04:30.023863995Z" level=info msg="StartContainer for \"d770732334f7ae92ad52f4007c3dee19f7e213c7a533eb85bb7fc1297044aaa2\" returns successfully" May 13 00:04:30.074228 update_engine[1455]: I20250513 00:04:30.074159 1455 update_attempter.cc:509] Updating boot flags... May 13 00:04:30.116224 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3140) May 13 00:04:30.135151 kubelet[1770]: E0513 00:04:30.135059 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:30.143335 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3141) May 13 00:04:30.185350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3141) May 13 00:04:30.342264 kubelet[1770]: I0513 00:04:30.342195 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.495491589 podStartE2EDuration="7.342178469s" podCreationTimestamp="2025-05-13 00:04:23 +0000 UTC" firstStartedPulling="2025-05-13 00:04:23.997875884 +0000 UTC m=+30.052331276" lastFinishedPulling="2025-05-13 00:04:29.844562764 +0000 UTC m=+35.899018156" observedRunningTime="2025-05-13 00:04:30.341912184 +0000 UTC m=+36.396367536" watchObservedRunningTime="2025-05-13 00:04:30.342178469 +0000 UTC m=+36.396633861" May 13 00:04:31.139291 kubelet[1770]: E0513 00:04:31.135694 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:32.136345 kubelet[1770]: E0513 00:04:32.136286 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:33.136892 kubelet[1770]: E0513 00:04:33.136846 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:34.137690 kubelet[1770]: E0513 00:04:34.137641 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:35.105399 kubelet[1770]: E0513 00:04:35.105351 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 13 00:04:35.137964 kubelet[1770]: E0513 00:04:35.137914 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:36.138053 kubelet[1770]: E0513 00:04:36.138004 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:37.138293 kubelet[1770]: E0513 00:04:37.138242 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:38.139291 kubelet[1770]: E0513 00:04:38.139242 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:39.140010 kubelet[1770]: E0513 00:04:39.139961 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:40.140890 kubelet[1770]: E0513 00:04:40.140846 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:40.149252 systemd[1]: Created slice kubepods-besteffort-pod536c6255_b28f_469f_aad1_eea18b1f122e.slice - libcontainer container kubepods-besteffort-pod536c6255_b28f_469f_aad1_eea18b1f122e.slice. May 13 00:04:40.199490 kubelet[1770]: I0513 00:04:40.199452 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkg47\" (UniqueName: \"kubernetes.io/projected/536c6255-b28f-469f-aad1-eea18b1f122e-kube-api-access-mkg47\") pod \"test-pod-1\" (UID: \"536c6255-b28f-469f-aad1-eea18b1f122e\") " pod="default/test-pod-1" May 13 00:04:40.199490 kubelet[1770]: I0513 00:04:40.199494 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5afd0a2e-c0bd-40f8-93ff-a1d4ee97c3d9\" (UniqueName: \"kubernetes.io/nfs/536c6255-b28f-469f-aad1-eea18b1f122e-pvc-5afd0a2e-c0bd-40f8-93ff-a1d4ee97c3d9\") pod \"test-pod-1\" (UID: \"536c6255-b28f-469f-aad1-eea18b1f122e\") " pod="default/test-pod-1" May 13 00:04:40.327127 kernel: FS-Cache: Loaded May 13 00:04:40.353259 kernel: RPC: Registered named UNIX socket transport module. May 13 00:04:40.353360 kernel: RPC: Registered udp transport module. May 13 00:04:40.353390 kernel: RPC: Registered tcp transport module. May 13 00:04:40.354181 kernel: RPC: Registered tcp-with-tls transport module. May 13 00:04:40.354210 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 13 00:04:40.534191 kernel: NFS: Registering the id_resolver key type May 13 00:04:40.534289 kernel: Key type id_resolver registered May 13 00:04:40.534307 kernel: Key type id_legacy registered May 13 00:04:40.563124 nfsidmap[3189]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:04:40.567261 nfsidmap[3192]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:04:40.752594 containerd[1470]: time="2025-05-13T00:04:40.752532919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:536c6255-b28f-469f-aad1-eea18b1f122e,Namespace:default,Attempt:0,}" May 13 00:04:40.798407 systemd-networkd[1392]: lxcd61c276e4449: Link UP May 13 00:04:40.812144 kernel: eth0: renamed from tmpd5269 May 13 00:04:40.826555 systemd-networkd[1392]: lxcd61c276e4449: Gained carrier May 13 00:04:41.020760 containerd[1470]: time="2025-05-13T00:04:41.020663380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:04:41.020760 containerd[1470]: time="2025-05-13T00:04:41.020723077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:04:41.020760 containerd[1470]: time="2025-05-13T00:04:41.020735600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:41.021059 containerd[1470]: time="2025-05-13T00:04:41.020809661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:41.041295 systemd[1]: Started cri-containerd-d52699c5064812ff8ff3dc324295d279d5e4dee0e656b9c7e46e26d065f88dae.scope - libcontainer container d52699c5064812ff8ff3dc324295d279d5e4dee0e656b9c7e46e26d065f88dae. 
May 13 00:04:41.054967 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:04:41.087195 containerd[1470]: time="2025-05-13T00:04:41.086634878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:536c6255-b28f-469f-aad1-eea18b1f122e,Namespace:default,Attempt:0,} returns sandbox id \"d52699c5064812ff8ff3dc324295d279d5e4dee0e656b9c7e46e26d065f88dae\"" May 13 00:04:41.088385 containerd[1470]: time="2025-05-13T00:04:41.088176278Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:04:41.141287 kubelet[1770]: E0513 00:04:41.141245 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:41.395932 containerd[1470]: time="2025-05-13T00:04:41.395669231Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:41.398873 containerd[1470]: time="2025-05-13T00:04:41.397338067Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 13 00:04:41.400587 containerd[1470]: time="2025-05-13T00:04:41.400456916Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 312.245829ms" May 13 00:04:41.400587 containerd[1470]: time="2025-05-13T00:04:41.400498128Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:04:41.402785 containerd[1470]: time="2025-05-13T00:04:41.402732085Z" level=info msg="CreateContainer within sandbox \"d52699c5064812ff8ff3dc324295d279d5e4dee0e656b9c7e46e26d065f88dae\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 00:04:41.417489 containerd[1470]: time="2025-05-13T00:04:41.417446923Z" level=info msg="CreateContainer within sandbox \"d52699c5064812ff8ff3dc324295d279d5e4dee0e656b9c7e46e26d065f88dae\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1063e5bc36ffa8da973f8f3f058ea41891929eba9c8233bae93a673ddb813003\"" May 13 00:04:41.418970 containerd[1470]: time="2025-05-13T00:04:41.418183933Z" level=info msg="StartContainer for \"1063e5bc36ffa8da973f8f3f058ea41891929eba9c8233bae93a673ddb813003\"" May 13 00:04:41.445291 systemd[1]: Started cri-containerd-1063e5bc36ffa8da973f8f3f058ea41891929eba9c8233bae93a673ddb813003.scope - libcontainer container 1063e5bc36ffa8da973f8f3f058ea41891929eba9c8233bae93a673ddb813003. 
May 13 00:04:41.471631 containerd[1470]: time="2025-05-13T00:04:41.471187252Z" level=info msg="StartContainer for \"1063e5bc36ffa8da973f8f3f058ea41891929eba9c8233bae93a673ddb813003\" returns successfully" May 13 00:04:41.977234 systemd-networkd[1392]: lxcd61c276e4449: Gained IPv6LL May 13 00:04:42.142754 kubelet[1770]: E0513 00:04:42.142683 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:42.375672 kubelet[1770]: I0513 00:04:42.375596 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.062291277 podStartE2EDuration="19.375580564s" podCreationTimestamp="2025-05-13 00:04:23 +0000 UTC" firstStartedPulling="2025-05-13 00:04:41.087883594 +0000 UTC m=+47.142338946" lastFinishedPulling="2025-05-13 00:04:41.401172841 +0000 UTC m=+47.455628233" observedRunningTime="2025-05-13 00:04:42.375498141 +0000 UTC m=+48.429953533" watchObservedRunningTime="2025-05-13 00:04:42.375580564 +0000 UTC m=+48.430035956" May 13 00:04:43.143862 kubelet[1770]: E0513 00:04:43.143786 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:44.144157 kubelet[1770]: E0513 00:04:44.144113 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:45.144451 kubelet[1770]: E0513 00:04:45.144401 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:46.144727 kubelet[1770]: E0513 00:04:46.144635 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:46.301529 containerd[1470]: time="2025-05-13T00:04:46.301478297Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:04:46.307787 containerd[1470]: time="2025-05-13T00:04:46.307728691Z" level=info msg="StopContainer for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" with timeout 2 (s)" May 13 00:04:46.308084 containerd[1470]: time="2025-05-13T00:04:46.308054807Z" level=info msg="Stop container \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" with signal terminated" May 13 00:04:46.318985 systemd-networkd[1392]: lxc_health: Link DOWN May 13 00:04:46.318990 systemd-networkd[1392]: lxc_health: Lost carrier May 13 00:04:46.350484 systemd[1]: cri-containerd-8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d.scope: Deactivated successfully. May 13 00:04:46.350794 systemd[1]: cri-containerd-8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d.scope: Consumed 6.475s CPU time. May 13 00:04:46.375994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d-rootfs.mount: Deactivated successfully. 
May 13 00:04:46.392859 containerd[1470]: time="2025-05-13T00:04:46.392794303Z" level=info msg="shim disconnected" id=8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d namespace=k8s.io May 13 00:04:46.392859 containerd[1470]: time="2025-05-13T00:04:46.392853117Z" level=warning msg="cleaning up after shim disconnected" id=8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d namespace=k8s.io May 13 00:04:46.392859 containerd[1470]: time="2025-05-13T00:04:46.392864600Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:46.407197 containerd[1470]: time="2025-05-13T00:04:46.406979967Z" level=info msg="StopContainer for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" returns successfully" May 13 00:04:46.408251 containerd[1470]: time="2025-05-13T00:04:46.408205096Z" level=info msg="StopPodSandbox for \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\"" May 13 00:04:46.413756 containerd[1470]: time="2025-05-13T00:04:46.413697591Z" level=info msg="Container to stop \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:04:46.413756 containerd[1470]: time="2025-05-13T00:04:46.413747162Z" level=info msg="Container to stop \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:04:46.413756 containerd[1470]: time="2025-05-13T00:04:46.413758565Z" level=info msg="Container to stop \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:04:46.413756 containerd[1470]: time="2025-05-13T00:04:46.413768807Z" level=info msg="Container to stop \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:04:46.413951 containerd[1470]: time="2025-05-13T00:04:46.413778450Z" level=info msg="Container to stop \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:04:46.415377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2-shm.mount: Deactivated successfully. May 13 00:04:46.419545 systemd[1]: cri-containerd-0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2.scope: Deactivated successfully. May 13 00:04:46.435196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2-rootfs.mount: Deactivated successfully. 
May 13 00:04:46.445724 containerd[1470]: time="2025-05-13T00:04:46.445682010Z" level=info msg="shim disconnected" id=0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2 namespace=k8s.io May 13 00:04:46.445724 containerd[1470]: time="2025-05-13T00:04:46.445723220Z" level=warning msg="cleaning up after shim disconnected" id=0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2 namespace=k8s.io May 13 00:04:46.445724 containerd[1470]: time="2025-05-13T00:04:46.445734263Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:46.458038 containerd[1470]: time="2025-05-13T00:04:46.457978949Z" level=info msg="TearDown network for sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" successfully" May 13 00:04:46.458038 containerd[1470]: time="2025-05-13T00:04:46.458015758Z" level=info msg="StopPodSandbox for \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" returns successfully" May 13 00:04:46.639922 kubelet[1770]: I0513 00:04:46.639867 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-net\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.639922 kubelet[1770]: I0513 00:04:46.639916 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-bpf-maps\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640184 kubelet[1770]: I0513 00:04:46.639942 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72015ef3-d003-4160-9c2b-66e1890cf82c-clustermesh-secrets\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640184 kubelet[1770]: I0513 00:04:46.639961 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-cgroup\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640184 kubelet[1770]: I0513 00:04:46.639977 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cni-path\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640184 kubelet[1770]: I0513 00:04:46.639992 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-etc-cni-netd\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640184 kubelet[1770]: I0513 00:04:46.640006 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-lib-modules\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640184 kubelet[1770]: I0513 00:04:46.640019 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-hostproc\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640385 kubelet[1770]: I0513 00:04:46.640038 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tslvp\" (UniqueName: \"kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-kube-api-access-tslvp\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640385 kubelet[1770]: I0513 00:04:46.640053 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-xtables-lock\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640385 kubelet[1770]: I0513 00:04:46.640069 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-config-path\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640385 kubelet[1770]: I0513 00:04:46.640083 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-run\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640385 kubelet[1770]: I0513 00:04:46.640124 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-kernel\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640385 kubelet[1770]: I0513 00:04:46.640144 1770 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-hubble-tls\") pod \"72015ef3-d003-4160-9c2b-66e1890cf82c\" (UID: \"72015ef3-d003-4160-9c2b-66e1890cf82c\") " May 13 00:04:46.640994 kubelet[1770]: I0513 00:04:46.640481 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.640994 kubelet[1770]: I0513 00:04:46.640535 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.640994 kubelet[1770]: I0513 00:04:46.640552 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cni-path" (OuterVolumeSpecName: "cni-path") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.640994 kubelet[1770]: I0513 00:04:46.640566 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.640994 kubelet[1770]: I0513 00:04:46.640580 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.641164 kubelet[1770]: I0513 00:04:46.640595 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.641164 kubelet[1770]: I0513 00:04:46.640609 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.641164 kubelet[1770]: I0513 00:04:46.640622 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-hostproc" (OuterVolumeSpecName: "hostproc") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.641164 kubelet[1770]: I0513 00:04:46.640678 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.642607 kubelet[1770]: I0513 00:04:46.642550 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:04:46.642709 kubelet[1770]: I0513 00:04:46.642635 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:04:46.645976 systemd[1]: var-lib-kubelet-pods-72015ef3\x2dd003\x2d4160\x2d9c2b\x2d66e1890cf82c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:04:46.647181 systemd[1]: var-lib-kubelet-pods-72015ef3\x2dd003\x2d4160\x2d9c2b\x2d66e1890cf82c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:04:46.651937 kubelet[1770]: I0513 00:04:46.651871 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72015ef3-d003-4160-9c2b-66e1890cf82c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:04:46.652089 kubelet[1770]: I0513 00:04:46.652048 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-kube-api-access-tslvp" (OuterVolumeSpecName: "kube-api-access-tslvp") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "kube-api-access-tslvp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:04:46.652180 kubelet[1770]: I0513 00:04:46.652153 1770 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "72015ef3-d003-4160-9c2b-66e1890cf82c" (UID: "72015ef3-d003-4160-9c2b-66e1890cf82c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741052 1770 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-kernel\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741090 1770 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-hubble-tls\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741118 1770 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-run\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741130 1770 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-bpf-maps\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741138 1770 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72015ef3-d003-4160-9c2b-66e1890cf82c-clustermesh-secrets\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741146 1770 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-cgroup\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741154 1770 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-host-proc-sys-net\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741179 kubelet[1770]: I0513 00:04:46.741162 1770 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-cni-path\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741430 kubelet[1770]: I0513 00:04:46.741171 1770 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-lib-modules\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741430 kubelet[1770]: I0513 00:04:46.741179 1770 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-hostproc\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741430 kubelet[1770]: I0513 00:04:46.741187 1770 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tslvp\" (UniqueName: \"kubernetes.io/projected/72015ef3-d003-4160-9c2b-66e1890cf82c-kube-api-access-tslvp\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741430 kubelet[1770]: I0513 00:04:46.741196 1770 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-xtables-lock\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741430 kubelet[1770]: I0513 00:04:46.741206 1770 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72015ef3-d003-4160-9c2b-66e1890cf82c-cilium-config-path\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:46.741430 kubelet[1770]: I0513 00:04:46.741213 1770 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72015ef3-d003-4160-9c2b-66e1890cf82c-etc-cni-netd\") on node \"10.0.0.133\" DevicePath \"\"" May 13 00:04:47.145175 kubelet[1770]: E0513 00:04:47.145131 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:47.258928 systemd[1]: Removed slice kubepods-burstable-pod72015ef3_d003_4160_9c2b_66e1890cf82c.slice - libcontainer container kubepods-burstable-pod72015ef3_d003_4160_9c2b_66e1890cf82c.slice. May 13 00:04:47.259141 systemd[1]: kubepods-burstable-pod72015ef3_d003_4160_9c2b_66e1890cf82c.slice: Consumed 6.625s CPU time. May 13 00:04:47.274690 systemd[1]: var-lib-kubelet-pods-72015ef3\x2dd003\x2d4160\x2d9c2b\x2d66e1890cf82c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtslvp.mount: Deactivated successfully. 
May 13 00:04:47.366791 kubelet[1770]: I0513 00:04:47.366768 1770 scope.go:117] "RemoveContainer" containerID="8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d" May 13 00:04:47.368060 containerd[1470]: time="2025-05-13T00:04:47.368029734Z" level=info msg="RemoveContainer for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\"" May 13 00:04:47.371069 containerd[1470]: time="2025-05-13T00:04:47.371038459Z" level=info msg="RemoveContainer for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" returns successfully" May 13 00:04:47.371331 kubelet[1770]: I0513 00:04:47.371310 1770 scope.go:117] "RemoveContainer" containerID="1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507" May 13 00:04:47.372433 containerd[1470]: time="2025-05-13T00:04:47.372411931Z" level=info msg="RemoveContainer for \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\"" May 13 00:04:47.374554 containerd[1470]: time="2025-05-13T00:04:47.374528893Z" level=info msg="RemoveContainer for \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\" returns successfully" May 13 00:04:47.374787 kubelet[1770]: I0513 00:04:47.374698 1770 scope.go:117] "RemoveContainer" containerID="37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7" May 13 00:04:47.375920 containerd[1470]: time="2025-05-13T00:04:47.375896925Z" level=info msg="RemoveContainer for \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\"" May 13 00:04:47.387026 containerd[1470]: time="2025-05-13T00:04:47.386959042Z" level=info msg="RemoveContainer for \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\" returns successfully" May 13 00:04:47.387278 kubelet[1770]: I0513 00:04:47.387244 1770 scope.go:117] "RemoveContainer" containerID="4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292" May 13 00:04:47.391008 containerd[1470]: time="2025-05-13T00:04:47.390970635Z" level=info msg="RemoveContainer for \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\"" May 13 00:04:47.393053 containerd[1470]: time="2025-05-13T00:04:47.393015181Z" level=info msg="RemoveContainer for \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\" returns successfully" May 13 00:04:47.393234 kubelet[1770]: I0513 00:04:47.393195 1770 scope.go:117] "RemoveContainer" containerID="0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1" May 13 00:04:47.394193 containerd[1470]: time="2025-05-13T00:04:47.394163962Z" level=info msg="RemoveContainer for \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\"" May 13 00:04:47.396409 containerd[1470]: time="2025-05-13T00:04:47.396329735Z" level=info msg="RemoveContainer for \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\" returns successfully" May 13 00:04:47.396554 kubelet[1770]: I0513 00:04:47.396521 1770 scope.go:117] "RemoveContainer" containerID="8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d" May 13 00:04:47.396794 containerd[1470]: time="2025-05-13T00:04:47.396740429Z" level=error msg="ContainerStatus for \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\": not found" May 13 00:04:47.397914 kubelet[1770]: E0513 00:04:47.397883 1770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\": not found" containerID="8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d" May 13 00:04:47.397986 kubelet[1770]: I0513 00:04:47.397918 1770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d"} err="failed to get container status \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d\": not found" May 13 00:04:47.397986 kubelet[1770]: I0513 00:04:47.397959 1770 scope.go:117] "RemoveContainer" containerID="1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507" May 13 00:04:47.398211 containerd[1470]: time="2025-05-13T00:04:47.398169354Z" level=error msg="ContainerStatus for \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\": not found" May 13 00:04:47.398385 kubelet[1770]: E0513 00:04:47.398300 1770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\": not found" containerID="1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507" May 13 00:04:47.398540 kubelet[1770]: I0513 00:04:47.398432 1770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507"} err="failed to get container status \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d3f5d566f5402b4b12e1018e443f40cfd47133b49b9411e4724329219316507\": not found" May 13 00:04:47.398540 kubelet[1770]: I0513 00:04:47.398460 1770 scope.go:117] "RemoveContainer" containerID="37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7" May 13 00:04:47.398672 containerd[1470]: time="2025-05-13T00:04:47.398639541Z" level=error msg="ContainerStatus for \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\": not found" May 13 00:04:47.398781 kubelet[1770]: E0513 00:04:47.398760 1770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\": not found" containerID="37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7" May 13 00:04:47.398821 kubelet[1770]: I0513 00:04:47.398787 1770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7"} err="failed to get container status \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"37af791121039b440a089e7366a8bc080ce238873db4eaf95d238b9eafb0d1c7\": not found" May 13 00:04:47.398821 kubelet[1770]: I0513 00:04:47.398804 1770 
scope.go:117] "RemoveContainer" containerID="4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292" May 13 00:04:47.399020 containerd[1470]: time="2025-05-13T00:04:47.398987540Z" level=error msg="ContainerStatus for \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\": not found" May 13 00:04:47.399338 kubelet[1770]: E0513 00:04:47.399202 1770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\": not found" containerID="4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292" May 13 00:04:47.399338 kubelet[1770]: I0513 00:04:47.399247 1770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292"} err="failed to get container status \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c10f9bb3ade7cf253fa9aeb48924b1a27e0bfae9a5b0cb6e834d40f4e0aa292\": not found" May 13 00:04:47.399338 kubelet[1770]: I0513 00:04:47.399263 1770 scope.go:117] "RemoveContainer" containerID="0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1" May 13 00:04:47.399442 containerd[1470]: time="2025-05-13T00:04:47.399414197Z" level=error msg="ContainerStatus for \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\": not found" May 13 00:04:47.399590 kubelet[1770]: E0513 00:04:47.399529 1770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\": not found" containerID="0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1" May 13 00:04:47.399590 kubelet[1770]: I0513 00:04:47.399554 1770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1"} err="failed to get container status \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bd6bcf24fe32d7e7faf79cdd2bd791c3051d4b5065b1dcca81fd97552184cd1\": not found" May 13 00:04:48.145873 kubelet[1770]: E0513 00:04:48.145821 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:49.146343 kubelet[1770]: E0513 00:04:49.146295 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:49.255228 kubelet[1770]: I0513 00:04:49.255183 1770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72015ef3-d003-4160-9c2b-66e1890cf82c" path="/var/lib/kubelet/pods/72015ef3-d003-4160-9c2b-66e1890cf82c/volumes" May 13 00:04:49.917109 kubelet[1770]: I0513 00:04:49.917065 1770 memory_manager.go:355] "RemoveStaleState removing state" podUID="72015ef3-d003-4160-9c2b-66e1890cf82c" containerName="cilium-agent" May 13 
00:04:49.921732 kubelet[1770]: W0513 00:04:49.921655 1770 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.133' and this object May 13 00:04:49.921732 kubelet[1770]: E0513 00:04:49.921695 1770 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.133' and this object" logger="UnhandledError" May 13 00:04:49.922697 systemd[1]: Created slice kubepods-besteffort-pod5ad502ac_fddf_4f6a_9454_683890c460d5.slice - libcontainer container kubepods-besteffort-pod5ad502ac_fddf_4f6a_9454_683890c460d5.slice. May 13 00:04:49.923579 kubelet[1770]: I0513 00:04:49.923407 1770 status_manager.go:890] "Failed to get status for pod" podUID="5ad502ac-fddf-4f6a-9454-683890c460d5" pod="kube-system/cilium-operator-6c4d7847fc-gqmpp" err="pods \"cilium-operator-6c4d7847fc-gqmpp\" is forbidden: User \"system:node:10.0.0.133\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.133' and this object" May 13 00:04:49.941013 systemd[1]: Created slice kubepods-burstable-pod24c19a16_a1ef_4056_91b8_8aaa552e9d6e.slice - libcontainer container kubepods-burstable-pod24c19a16_a1ef_4056_91b8_8aaa552e9d6e.slice. May 13 00:04:50.060064 kubelet[1770]: I0513 00:04:50.059902 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-cilium-ipsec-secrets\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060064 kubelet[1770]: I0513 00:04:50.059951 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7lp\" (UniqueName: \"kubernetes.io/projected/5ad502ac-fddf-4f6a-9454-683890c460d5-kube-api-access-8s7lp\") pod \"cilium-operator-6c4d7847fc-gqmpp\" (UID: \"5ad502ac-fddf-4f6a-9454-683890c460d5\") " pod="kube-system/cilium-operator-6c4d7847fc-gqmpp" May 13 00:04:50.060064 kubelet[1770]: I0513 00:04:50.059969 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-etc-cni-netd\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060064 kubelet[1770]: I0513 00:04:50.059985 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-xtables-lock\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060064 kubelet[1770]: I0513 00:04:50.060012 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-hubble-tls\") pod \"cilium-ggwtv\" (UID: 
\"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060311 kubelet[1770]: I0513 00:04:50.060029 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-cilium-cgroup\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060311 kubelet[1770]: I0513 00:04:50.060043 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-cni-path\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060311 kubelet[1770]: I0513 00:04:50.060059 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ad502ac-fddf-4f6a-9454-683890c460d5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gqmpp\" (UID: \"5ad502ac-fddf-4f6a-9454-683890c460d5\") " pod="kube-system/cilium-operator-6c4d7847fc-gqmpp" May 13 00:04:50.060311 kubelet[1770]: I0513 00:04:50.060077 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-cilium-config-path\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060311 kubelet[1770]: I0513 00:04:50.060106 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-bpf-maps\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060433 kubelet[1770]: I0513 00:04:50.060123 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-hostproc\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060433 kubelet[1770]: I0513 00:04:50.060142 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-lib-modules\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060433 kubelet[1770]: I0513 00:04:50.060162 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-host-proc-sys-net\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060433 kubelet[1770]: I0513 00:04:50.060205 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-cilium-run\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060433 kubelet[1770]: I0513 00:04:50.060237 1770 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-clustermesh-secrets\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060433 kubelet[1770]: I0513 00:04:50.060255 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-host-proc-sys-kernel\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.060559 kubelet[1770]: I0513 00:04:50.060271 1770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfd6c\" (UniqueName: \"kubernetes.io/projected/24c19a16-a1ef-4056-91b8-8aaa552e9d6e-kube-api-access-dfd6c\") pod \"cilium-ggwtv\" (UID: \"24c19a16-a1ef-4056-91b8-8aaa552e9d6e\") " pod="kube-system/cilium-ggwtv" May 13 00:04:50.146686 kubelet[1770]: E0513 00:04:50.146639 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:50.261715 kubelet[1770]: E0513 00:04:50.261598 1770 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:04:51.147012 kubelet[1770]: E0513 00:04:51.146968 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:51.425503 kubelet[1770]: E0513 00:04:51.425379 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:51.426011 containerd[1470]: time="2025-05-13T00:04:51.425962887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gqmpp,Uid:5ad502ac-fddf-4f6a-9454-683890c460d5,Namespace:kube-system,Attempt:0,}" May 13 00:04:51.456569 kubelet[1770]: E0513 00:04:51.456520 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:51.456991 containerd[1470]: time="2025-05-13T00:04:51.456958522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggwtv,Uid:24c19a16-a1ef-4056-91b8-8aaa552e9d6e,Namespace:kube-system,Attempt:0,}" May 13 00:04:51.472059 containerd[1470]: time="2025-05-13T00:04:51.471393287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:04:51.472059 containerd[1470]: time="2025-05-13T00:04:51.471442136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:04:51.472059 containerd[1470]: time="2025-05-13T00:04:51.471453139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:51.472059 containerd[1470]: time="2025-05-13T00:04:51.471519672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:51.485531 containerd[1470]: time="2025-05-13T00:04:51.484734073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:04:51.485531 containerd[1470]: time="2025-05-13T00:04:51.485179082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:04:51.485531 containerd[1470]: time="2025-05-13T00:04:51.485192285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:51.485531 containerd[1470]: time="2025-05-13T00:04:51.485281622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:51.491290 systemd[1]: Started cri-containerd-3a1c792bdb0aacd9c67619b09a5d90d13115d73afb1e81c36c07f039ab56c6c8.scope - libcontainer container 3a1c792bdb0aacd9c67619b09a5d90d13115d73afb1e81c36c07f039ab56c6c8. May 13 00:04:51.504008 systemd[1]: Started cri-containerd-498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f.scope - libcontainer container 498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f. May 13 00:04:51.529313 containerd[1470]: time="2025-05-13T00:04:51.529260092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggwtv,Uid:24c19a16-a1ef-4056-91b8-8aaa552e9d6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\"" May 13 00:04:51.530204 kubelet[1770]: E0513 00:04:51.530180 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:51.530776 containerd[1470]: time="2025-05-13T00:04:51.530736587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gqmpp,Uid:5ad502ac-fddf-4f6a-9454-683890c460d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a1c792bdb0aacd9c67619b09a5d90d13115d73afb1e81c36c07f039ab56c6c8\"" May 13 00:04:51.531408 kubelet[1770]: E0513 00:04:51.531381 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:51.532282 containerd[1470]: time="2025-05-13T00:04:51.532255731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:04:51.532810 containerd[1470]: time="2025-05-13T00:04:51.532780516Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:04:51.549521 containerd[1470]: time="2025-05-13T00:04:51.549470211Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f\"" May 13 00:04:51.553977 containerd[1470]: time="2025-05-13T00:04:51.550076093Z" level=info msg="StartContainer for \"45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f\"" May 13 00:04:51.575286 systemd[1]: Started 
cri-containerd-45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f.scope - libcontainer container 45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f. May 13 00:04:51.599652 containerd[1470]: time="2025-05-13T00:04:51.599598590Z" level=info msg="StartContainer for \"45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f\" returns successfully" May 13 00:04:51.710048 systemd[1]: cri-containerd-45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f.scope: Deactivated successfully. May 13 00:04:51.749341 containerd[1470]: time="2025-05-13T00:04:51.749279426Z" level=info msg="shim disconnected" id=45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f namespace=k8s.io May 13 00:04:51.749341 containerd[1470]: time="2025-05-13T00:04:51.749337718Z" level=warning msg="cleaning up after shim disconnected" id=45fc6a6581f5af893fdb77777af620594cdf74bf2dd6624e7fc0fea8be2c8f1f namespace=k8s.io May 13 00:04:51.749341 containerd[1470]: time="2025-05-13T00:04:51.749346680Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:52.147496 kubelet[1770]: E0513 00:04:52.147443 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:52.378757 kubelet[1770]: E0513 00:04:52.378469 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:52.380363 containerd[1470]: time="2025-05-13T00:04:52.380236825Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:04:52.389587 containerd[1470]: time="2025-05-13T00:04:52.389538629Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10\"" May 13 00:04:52.390902 containerd[1470]: time="2025-05-13T00:04:52.390007600Z" level=info msg="StartContainer for \"3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10\"" May 13 00:04:52.416298 systemd[1]: Started cri-containerd-3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10.scope - libcontainer container 3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10. May 13 00:04:52.437144 containerd[1470]: time="2025-05-13T00:04:52.436986633Z" level=info msg="StartContainer for \"3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10\" returns successfully" May 13 00:04:52.457919 systemd[1]: cri-containerd-3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10.scope: Deactivated successfully. May 13 00:04:52.475778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10-rootfs.mount: Deactivated successfully. 
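[annotation] Back at 00:04:47 above, the kubelet asks the runtime for the status of containers it has just removed and gets "rpc error: code = NotFound"; it logs the error and carries on, treating "not found" as "already deleted". Here is a minimal sketch of that idempotent-cleanup pattern using the standard gRPC status package; the removeContainer helper is a hypothetical stand-in, not the kubelet's or containerd's actual function.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer stands in for a CRI RemoveContainer/ContainerStatus call;
    // here it always reports the container as already gone.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound,
            "an error occurred when try to find container %q: not found", id)
    }

    // cleanup treats gRPC NotFound as success, the way the kubelet messages
    // above log the error but still finish the delete.
    func cleanup(id string) error {
        if err := removeContainer(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %s already gone, nothing to do\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        if err := cleanup("8647800aeaa6d66205083e361f7088113d4c51f7e5422b438627a4dd321c477d"); err != nil {
            fmt.Println("cleanup failed:", err)
        }
    }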
May 13 00:04:52.480145 containerd[1470]: time="2025-05-13T00:04:52.480079632Z" level=info msg="shim disconnected" id=3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10 namespace=k8s.io May 13 00:04:52.480145 containerd[1470]: time="2025-05-13T00:04:52.480143324Z" level=warning msg="cleaning up after shim disconnected" id=3fd0b081ca07a0438b562fd5d52d3ca13bf09cc0c49ac141c9d8a02698d8ea10 namespace=k8s.io May 13 00:04:52.480145 containerd[1470]: time="2025-05-13T00:04:52.480152126Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:52.875869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284957458.mount: Deactivated successfully. May 13 00:04:53.147746 kubelet[1770]: E0513 00:04:53.147625 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:53.384193 kubelet[1770]: E0513 00:04:53.383837 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:53.385831 containerd[1470]: time="2025-05-13T00:04:53.385791520Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:04:53.400497 containerd[1470]: time="2025-05-13T00:04:53.399644090Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63\"" May 13 00:04:53.400497 containerd[1470]: time="2025-05-13T00:04:53.400440841Z" level=info msg="StartContainer for \"055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63\"" May 13 00:04:53.429236 systemd[1]: Started cri-containerd-055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63.scope - libcontainer container 055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63. May 13 00:04:53.466894 systemd[1]: cri-containerd-055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63.scope: Deactivated successfully. 
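[annotation] The recurring dns.go:153 warnings ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") come from the kubelet capping a pod's resolv.conf at three nameservers (glibc's MAXNS). A minimal sketch of that truncation, assuming a host resolv.conf with one server too many; the fourth address is invented here for illustration.

    package main

    import "fmt"

    const maxDNSNameservers = 3 // the cap the kubelet applies to pod resolv.conf

    func applyNameserverLimit(servers []string) []string {
        if len(servers) > maxDNSNameservers {
            servers = servers[:maxDNSNameservers] // extras are dropped and a warning is logged
        }
        return servers
    }

    func main() {
        // Hypothetical host configuration with one nameserver too many.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        fmt.Println("applied nameserver line is:", applyNameserverLimit(host))
    }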
May 13 00:04:53.494114 containerd[1470]: time="2025-05-13T00:04:53.494050242Z" level=info msg="StartContainer for \"055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63\" returns successfully" May 13 00:04:53.497449 containerd[1470]: time="2025-05-13T00:04:53.497277810Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:53.498010 containerd[1470]: time="2025-05-13T00:04:53.497953538Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 00:04:53.498575 containerd[1470]: time="2025-05-13T00:04:53.498550650Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:53.500500 containerd[1470]: time="2025-05-13T00:04:53.500472933Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.968181555s" May 13 00:04:53.500537 containerd[1470]: time="2025-05-13T00:04:53.500505459Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 00:04:53.503192 containerd[1470]: time="2025-05-13T00:04:53.503161559Z" level=info msg="CreateContainer within sandbox \"3a1c792bdb0aacd9c67619b09a5d90d13115d73afb1e81c36c07f039ab56c6c8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:04:53.524800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63-rootfs.mount: Deactivated successfully. 
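[annotation] The operator image above is pulled by a combined tag-and-digest reference (quay.io/cilium/operator-generic:v1.12.5@sha256:b296…); once a digest is given, the tag is informational only, which is consistent with the pulled image recording an empty repo tag and only the repo digest. A small sketch, using plain string handling rather than a real registry library, of splitting such a reference into its parts:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitReference breaks "repo:tag@sha256:..." into repo, tag and digest.
    // It is a simplification of what a full image-reference parser does.
    func splitReference(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // The tag is whatever follows the last ':' after the final '/',
        // so registry ports such as "example.com:5000/…" are left alone.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitReference(
            "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
        fmt.Println("repo:  ", repo)
        fmt.Println("tag:   ", tag)    // informational once a digest is present
        fmt.Println("digest:", digest) // what the runtime actually resolves
    }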
May 13 00:04:53.539512 containerd[1470]: time="2025-05-13T00:04:53.539466961Z" level=info msg="CreateContainer within sandbox \"3a1c792bdb0aacd9c67619b09a5d90d13115d73afb1e81c36c07f039ab56c6c8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5ce1bf0ac1dcc040106fa18861b38c0f41a987f365cba9324c5d49c0ad327e87\"" May 13 00:04:53.540471 containerd[1470]: time="2025-05-13T00:04:53.540435584Z" level=info msg="StartContainer for \"5ce1bf0ac1dcc040106fa18861b38c0f41a987f365cba9324c5d49c0ad327e87\"" May 13 00:04:53.540961 containerd[1470]: time="2025-05-13T00:04:53.540918515Z" level=info msg="shim disconnected" id=055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63 namespace=k8s.io May 13 00:04:53.540961 containerd[1470]: time="2025-05-13T00:04:53.540960363Z" level=warning msg="cleaning up after shim disconnected" id=055d947895d95299181eed6bfc60457af63a2f0a8ad119ab2df413fcdc25df63 namespace=k8s.io May 13 00:04:53.541185 containerd[1470]: time="2025-05-13T00:04:53.540968324Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:53.566309 systemd[1]: Started cri-containerd-5ce1bf0ac1dcc040106fa18861b38c0f41a987f365cba9324c5d49c0ad327e87.scope - libcontainer container 5ce1bf0ac1dcc040106fa18861b38c0f41a987f365cba9324c5d49c0ad327e87. May 13 00:04:53.591129 containerd[1470]: time="2025-05-13T00:04:53.591043361Z" level=info msg="StartContainer for \"5ce1bf0ac1dcc040106fa18861b38c0f41a987f365cba9324c5d49c0ad327e87\" returns successfully" May 13 00:04:54.148213 kubelet[1770]: E0513 00:04:54.148149 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:54.387911 kubelet[1770]: E0513 00:04:54.387537 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:54.389487 kubelet[1770]: E0513 00:04:54.389467 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:54.389639 containerd[1470]: time="2025-05-13T00:04:54.389554267Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:04:54.400576 containerd[1470]: time="2025-05-13T00:04:54.400134166Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1\"" May 13 00:04:54.400814 containerd[1470]: time="2025-05-13T00:04:54.400782365Z" level=info msg="StartContainer for \"0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1\"" May 13 00:04:54.415875 kubelet[1770]: I0513 00:04:54.415809 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gqmpp" podStartSLOduration=3.44619969 podStartE2EDuration="5.415793156s" podCreationTimestamp="2025-05-13 00:04:49 +0000 UTC" firstStartedPulling="2025-05-13 00:04:51.531843248 +0000 UTC m=+57.586298640" lastFinishedPulling="2025-05-13 00:04:53.501436714 +0000 UTC m=+59.555892106" observedRunningTime="2025-05-13 00:04:54.415427729 +0000 UTC m=+60.469883121" watchObservedRunningTime="2025-05-13 00:04:54.415793156 +0000 
UTC m=+60.470248628" May 13 00:04:54.429279 systemd[1]: Started cri-containerd-0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1.scope - libcontainer container 0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1. May 13 00:04:54.452068 systemd[1]: cri-containerd-0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1.scope: Deactivated successfully. May 13 00:04:54.458739 containerd[1470]: time="2025-05-13T00:04:54.458707582Z" level=info msg="StartContainer for \"0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1\" returns successfully" May 13 00:04:54.472305 containerd[1470]: time="2025-05-13T00:04:54.460767880Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c19a16_a1ef_4056_91b8_8aaa552e9d6e.slice/cri-containerd-0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1.scope/memory.events\": no such file or directory" May 13 00:04:54.474181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1-rootfs.mount: Deactivated successfully. May 13 00:04:54.479049 containerd[1470]: time="2025-05-13T00:04:54.478992700Z" level=info msg="shim disconnected" id=0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1 namespace=k8s.io May 13 00:04:54.479049 containerd[1470]: time="2025-05-13T00:04:54.479044469Z" level=warning msg="cleaning up after shim disconnected" id=0448c194e44ac10722f6521ed7a0c06d4e7aea97e13f62c5beae15d1fefc63f1 namespace=k8s.io May 13 00:04:54.479049 containerd[1470]: time="2025-05-13T00:04:54.479053551Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:04:55.106210 kubelet[1770]: E0513 00:04:55.106162 1770 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:55.123028 containerd[1470]: time="2025-05-13T00:04:55.122994070Z" level=info msg="StopPodSandbox for \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\"" May 13 00:04:55.123347 containerd[1470]: time="2025-05-13T00:04:55.123083326Z" level=info msg="TearDown network for sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" successfully" May 13 00:04:55.123347 containerd[1470]: time="2025-05-13T00:04:55.123115172Z" level=info msg="StopPodSandbox for \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" returns successfully" May 13 00:04:55.123706 containerd[1470]: time="2025-05-13T00:04:55.123662109Z" level=info msg="RemovePodSandbox for \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\"" May 13 00:04:55.123706 containerd[1470]: time="2025-05-13T00:04:55.123696835Z" level=info msg="Forcibly stopping sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\"" May 13 00:04:55.123767 containerd[1470]: time="2025-05-13T00:04:55.123744764Z" level=info msg="TearDown network for sandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" successfully" May 13 00:04:55.126913 containerd[1470]: time="2025-05-13T00:04:55.126867121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
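[annotation] The pod_startup_latency_tracker entry at 00:04:54.415 above reports podStartE2EDuration="5.415793156s" but podStartSLOduration=3.44619969 for cilium-operator-6c4d7847fc-gqmpp. The numbers fit the convention that the SLO figure excludes time spent pulling images: E2E minus (lastFinishedPulling - firstStartedPulling) reproduces the SLO value exactly. A small Go check using the timestamps from that log entry (the monotonic "m=+…" suffixes are dropped before parsing):

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches the "2025-05-13 00:04:51.531843248 +0000 UTC" form used in the log.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-05-13 00:04:49 +0000 UTC")              // podCreationTimestamp
        firstPull := mustParse("2025-05-13 00:04:51.531843248 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-05-13 00:04:53.501436714 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2025-05-13 00:04:54.415793156 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)        // 5.415793156s, the podStartE2EDuration in the log
        pulling := lastPull.Sub(firstPull) // ~1.97s spent pulling the operator image
        slo := e2e - pulling               // 3.44619969s, the podStartSLOduration in the log

        fmt.Printf("e2e=%v pulling=%v slo=%v\n", e2e, pulling, slo.Seconds())
    }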
May 13 00:04:55.126985 containerd[1470]: time="2025-05-13T00:04:55.126919770Z" level=info msg="RemovePodSandbox \"0eb2242d48b20adcdce6e6ea90c9219fc565cc663175453db60f28e96d25d0b2\" returns successfully" May 13 00:04:55.149083 kubelet[1770]: E0513 00:04:55.149051 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:55.262043 kubelet[1770]: E0513 00:04:55.262012 1770 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:04:55.393872 kubelet[1770]: E0513 00:04:55.393631 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:55.394344 kubelet[1770]: E0513 00:04:55.394265 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:55.395501 containerd[1470]: time="2025-05-13T00:04:55.395464688Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:04:55.411535 containerd[1470]: time="2025-05-13T00:04:55.411461623Z" level=info msg="CreateContainer within sandbox \"498fe8e0fe4c4579b277c5ca6f840923ed24a68b3bfad1065a6f5a1288c2de9f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86d4939ecac4809cedc332a5735760d47f0dd8efd1ae1c4c2971d91a7fc97af8\"" May 13 00:04:55.411932 containerd[1470]: time="2025-05-13T00:04:55.411905302Z" level=info msg="StartContainer for \"86d4939ecac4809cedc332a5735760d47f0dd8efd1ae1c4c2971d91a7fc97af8\"" May 13 00:04:55.437267 systemd[1]: Started cri-containerd-86d4939ecac4809cedc332a5735760d47f0dd8efd1ae1c4c2971d91a7fc97af8.scope - libcontainer container 86d4939ecac4809cedc332a5735760d47f0dd8efd1ae1c4c2971d91a7fc97af8. 
May 13 00:04:55.461326 containerd[1470]: time="2025-05-13T00:04:55.461273991Z" level=info msg="StartContainer for \"86d4939ecac4809cedc332a5735760d47f0dd8efd1ae1c4c2971d91a7fc97af8\" returns successfully" May 13 00:04:55.718181 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 13 00:04:56.150112 kubelet[1770]: E0513 00:04:56.150069 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:56.406237 kubelet[1770]: E0513 00:04:56.406125 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:56.422769 kubelet[1770]: I0513 00:04:56.422697 1770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ggwtv" podStartSLOduration=7.422681464 podStartE2EDuration="7.422681464s" podCreationTimestamp="2025-05-13 00:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:04:56.422664021 +0000 UTC m=+62.477119413" watchObservedRunningTime="2025-05-13 00:04:56.422681464 +0000 UTC m=+62.477136816" May 13 00:04:56.499294 kubelet[1770]: I0513 00:04:56.499226 1770 setters.go:602] "Node became not ready" node="10.0.0.133" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:04:56Z","lastTransitionTime":"2025-05-13T00:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 00:04:57.150910 kubelet[1770]: E0513 00:04:57.150855 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:57.458026 kubelet[1770]: E0513 00:04:57.457912 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:04:58.151180 kubelet[1770]: E0513 00:04:58.151135 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:58.648831 systemd-networkd[1392]: lxc_health: Link UP May 13 00:04:58.658282 systemd-networkd[1392]: lxc_health: Gained carrier May 13 00:04:59.152337 kubelet[1770]: E0513 00:04:59.152278 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:59.461226 kubelet[1770]: E0513 00:04:59.460984 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:05:00.152939 kubelet[1770]: E0513 00:05:00.152891 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:05:00.412528 kubelet[1770]: E0513 00:05:00.412370 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:05:00.601244 systemd-networkd[1392]: lxc_health: Gained IPv6LL May 13 00:05:01.153793 kubelet[1770]: E0513 00:05:01.153730 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 
00:05:01.414648 kubelet[1770]: E0513 00:05:01.414208 1770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:05:02.154291 kubelet[1770]: E0513 00:05:02.154249 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:05:03.154771 kubelet[1770]: E0513 00:05:03.154721 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:05:04.155904 kubelet[1770]: E0513 00:05:04.155845 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:05:05.156709 kubelet[1770]: E0513 00:05:05.156651 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:05:06.157480 kubelet[1770]: E0513 00:05:06.157433 1770 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
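[annotation] Read in sequence, the containerd messages above show the new cilium-ggwtv pod bringing up its containers in the usual Cilium order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and finally cilium-agent, after which the lxc_health interface comes up. A throwaway sketch for pulling that order out of a journal excerpt like this one; the regular expression only needs to match the CreateContainer request lines above.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // createRe matches containerd's "CreateContainer within sandbox ... for
    // container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" messages.
    var createRe = regexp.MustCompile(`CreateContainer within sandbox .* for container &ContainerMetadata\{Name:([^,]+),`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := createRe.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Println(m[1]) // prints container names in start order
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "read error:", err)
        }
    }

Feeding this excerpt on stdin prints the five container names in the order they were created.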