May 13 00:20:10.722762 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:20:10.722780 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon May 12 23:22:00 -00 2025
May 13 00:20:10.722788 kernel: efi: EFI v2.70 by EDK II
May 13 00:20:10.722793 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 13 00:20:10.722798 kernel: random: crng init done
May 13 00:20:10.722804 kernel: ACPI: Early table checksum verification disabled
May 13 00:20:10.722810 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 13 00:20:10.722817 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:20:10.722822 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722827 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722833 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722838 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722843 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722849 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722857 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722862 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722868 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:10.722874 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:20:10.722879 kernel: NUMA: Failed to initialise from firmware
May 13 00:20:10.722885 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:20:10.722891 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 13 00:20:10.722896 kernel: Zone ranges:
May 13 00:20:10.722902 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:20:10.722908 kernel: DMA32 empty
May 13 00:20:10.722914 kernel: Normal empty
May 13 00:20:10.722919 kernel: Movable zone start for each node
May 13 00:20:10.722925 kernel: Early memory node ranges
May 13 00:20:10.722930 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 13 00:20:10.722936 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 13 00:20:10.722941 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 13 00:20:10.722947 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 13 00:20:10.722953 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 13 00:20:10.722958 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 13 00:20:10.722964 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 13 00:20:10.722969 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:20:10.722976 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:20:10.722982 kernel: psci: probing for conduit method from ACPI.
May 13 00:20:10.722987 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:20:10.722993 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:20:10.722999 kernel: psci: Trusted OS migration not required
May 13 00:20:10.723007 kernel: psci: SMC Calling Convention v1.1
May 13 00:20:10.723013 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:20:10.723020 kernel: ACPI: SRAT not present
May 13 00:20:10.723026 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 13 00:20:10.723032 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 13 00:20:10.723038 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:20:10.723044 kernel: Detected PIPT I-cache on CPU0
May 13 00:20:10.723050 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:20:10.723056 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:20:10.723062 kernel: CPU features: detected: Spectre-v4
May 13 00:20:10.723068 kernel: CPU features: detected: Spectre-BHB
May 13 00:20:10.723075 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:20:10.723081 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:20:10.723087 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:20:10.723093 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:20:10.723099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:20:10.723105 kernel: Policy zone: DMA
May 13 00:20:10.723112 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:20:10.723118 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:20:10.723124 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:20:10.723130 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:20:10.723136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:20:10.723144 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 13 00:20:10.723150 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:20:10.723156 kernel: trace event string verifier disabled
May 13 00:20:10.723162 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:20:10.723169 kernel: rcu: RCU event tracing is enabled.
May 13 00:20:10.723175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:20:10.723181 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:20:10.723187 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:20:10.723193 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:20:10.723200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:20:10.723205 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:20:10.723213 kernel: GICv3: 256 SPIs implemented
May 13 00:20:10.723224 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:20:10.723230 kernel: GICv3: Distributor has no Range Selector support
May 13 00:20:10.723239 kernel: Root IRQ handler: gic_handle_irq
May 13 00:20:10.723245 kernel: GICv3: 16 PPIs implemented
May 13 00:20:10.723251 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:20:10.723257 kernel: ACPI: SRAT not present
May 13 00:20:10.723263 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:20:10.723269 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:20:10.723275 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:20:10.723281 kernel: GICv3: using LPI property table @0x00000000400d0000
May 13 00:20:10.723287 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 13 00:20:10.723295 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:20:10.723301 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:20:10.723307 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:20:10.723314 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:20:10.723320 kernel: arm-pv: using stolen time PV
May 13 00:20:10.723326 kernel: Console: colour dummy device 80x25
May 13 00:20:10.723332 kernel: ACPI: Core revision 20210730
May 13 00:20:10.723338 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:20:10.723345 kernel: pid_max: default: 32768 minimum: 301
May 13 00:20:10.723351 kernel: LSM: Security Framework initializing
May 13 00:20:10.723358 kernel: SELinux: Initializing.
May 13 00:20:10.723437 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:20:10.723447 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:20:10.723453 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:20:10.723459 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:20:10.723466 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:20:10.723473 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:20:10.723479 kernel: Remapping and enabling EFI services.
May 13 00:20:10.723486 kernel: smp: Bringing up secondary CPUs ...
May 13 00:20:10.723494 kernel: Detected PIPT I-cache on CPU1
May 13 00:20:10.723501 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:20:10.723507 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 13 00:20:10.723514 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:20:10.723520 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:20:10.723526 kernel: Detected PIPT I-cache on CPU2
May 13 00:20:10.723533 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:20:10.723544 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 13 00:20:10.723550 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:20:10.723556 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:20:10.723564 kernel: Detected PIPT I-cache on CPU3
May 13 00:20:10.723570 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:20:10.723576 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 13 00:20:10.723583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:20:10.723593 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:20:10.723601 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:20:10.723608 kernel: SMP: Total of 4 processors activated.
May 13 00:20:10.723614 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:20:10.723621 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:20:10.723627 kernel: CPU features: detected: Common not Private translations
May 13 00:20:10.723634 kernel: CPU features: detected: CRC32 instructions
May 13 00:20:10.723640 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:20:10.723648 kernel: CPU features: detected: LSE atomic instructions
May 13 00:20:10.723655 kernel: CPU features: detected: Privileged Access Never
May 13 00:20:10.723661 kernel: CPU features: detected: RAS Extension Support
May 13 00:20:10.723668 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:20:10.723674 kernel: CPU: All CPU(s) started at EL1
May 13 00:20:10.723682 kernel: alternatives: patching kernel code
May 13 00:20:10.723689 kernel: devtmpfs: initialized
May 13 00:20:10.723695 kernel: KASLR enabled
May 13 00:20:10.723702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:20:10.723708 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:20:10.723715 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:20:10.723722 kernel: SMBIOS 3.0.0 present.
May 13 00:20:10.723729 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 13 00:20:10.723735 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:20:10.723743 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:20:10.723750 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:20:10.723756 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:20:10.723763 kernel: audit: initializing netlink subsys (disabled)
May 13 00:20:10.723769 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
May 13 00:20:10.723776 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:20:10.723783 kernel: cpuidle: using governor menu
May 13 00:20:10.723789 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:20:10.723796 kernel: ASID allocator initialised with 32768 entries
May 13 00:20:10.723804 kernel: ACPI: bus type PCI registered
May 13 00:20:10.723810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:20:10.723817 kernel: Serial: AMBA PL011 UART driver
May 13 00:20:10.723823 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:20:10.723830 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:20:10.723837 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:20:10.723843 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:20:10.723850 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:20:10.723857 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:20:10.723865 kernel: ACPI: Added _OSI(Module Device)
May 13 00:20:10.723871 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:20:10.723878 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:20:10.723885 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:20:10.723891 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:20:10.723898 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:20:10.723904 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:20:10.723911 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:20:10.723918 kernel: ACPI: Interpreter enabled
May 13 00:20:10.723926 kernel: ACPI: Using GIC for interrupt routing
May 13 00:20:10.723932 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:20:10.723939 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:20:10.723945 kernel: printk: console [ttyAMA0] enabled
May 13 00:20:10.723952 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:20:10.724082 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:20:10.724146 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:20:10.724204 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:20:10.724261 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:20:10.724317 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:20:10.724326 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:20:10.724332 kernel: PCI host bridge to bus 0000:00
May 13 00:20:10.724426 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:20:10.724497 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:20:10.724550 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:20:10.724606 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:20:10.724677 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:20:10.724746 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:20:10.724816 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:20:10.724878 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:20:10.724939 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:20:10.725016 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:20:10.725079 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:20:10.725139 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:20:10.725191 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:20:10.725244 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:20:10.725297 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:20:10.725306 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:20:10.725312 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:20:10.725321 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:20:10.725328 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:20:10.725335 kernel: iommu: Default domain type: Translated
May 13 00:20:10.725341 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:20:10.725348 kernel: vgaarb: loaded
May 13 00:20:10.725355 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:20:10.725361 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 13 00:20:10.725403 kernel: PTP clock support registered
May 13 00:20:10.725410 kernel: Registered efivars operations
May 13 00:20:10.725421 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:20:10.725427 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:20:10.725442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:20:10.725449 kernel: pnp: PnP ACPI init
May 13 00:20:10.725526 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:20:10.725536 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:20:10.725542 kernel: NET: Registered PF_INET protocol family
May 13 00:20:10.725549 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:20:10.725558 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:20:10.725564 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:20:10.725571 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:20:10.725578 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:20:10.725585 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:20:10.725591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:20:10.725598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:20:10.725605 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:20:10.725612 kernel: PCI: CLS 0 bytes, default 64
May 13 00:20:10.725620 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:20:10.725626 kernel: kvm [1]: HYP mode not available
May 13 00:20:10.725633 kernel: Initialise system trusted keyrings
May 13 00:20:10.725639 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:20:10.725646 kernel: Key type asymmetric registered
May 13 00:20:10.725653 kernel: Asymmetric key parser 'x509' registered
May 13 00:20:10.725659 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:20:10.725666 kernel: io scheduler mq-deadline registered
May 13 00:20:10.725672 kernel: io scheduler kyber registered
May 13 00:20:10.725680 kernel: io scheduler bfq registered
May 13 00:20:10.725687 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:20:10.725693 kernel: ACPI: button: Power Button [PWRB]
May 13 00:20:10.725701 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:20:10.725760 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:20:10.725769 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:20:10.725776 kernel: thunder_xcv, ver 1.0
May 13 00:20:10.725783 kernel: thunder_bgx, ver 1.0
May 13 00:20:10.725789 kernel: nicpf, ver 1.0
May 13 00:20:10.725797 kernel: nicvf, ver 1.0
May 13 00:20:10.725867 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:20:10.725922 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:20:10 UTC (1747095610)
May 13 00:20:10.725931 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:20:10.725937 kernel: NET: Registered PF_INET6 protocol family
May 13 00:20:10.725949 kernel: Segment Routing with IPv6
May 13 00:20:10.725956 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:20:10.725962 kernel: NET: Registered PF_PACKET protocol family
May 13 00:20:10.725971 kernel: Key type dns_resolver registered
May 13 00:20:10.725977 kernel: registered taskstats version 1
May 13 00:20:10.725984 kernel: Loading compiled-in X.509 certificates
May 13 00:20:10.725991 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: d291b704d59536a3c0ba96fd6f5a99459de8de99'
May 13 00:20:10.725997 kernel: Key type .fscrypt registered
May 13 00:20:10.726004 kernel: Key type fscrypt-provisioning registered
May 13 00:20:10.726011 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:20:10.726017 kernel: ima: Allocated hash algorithm: sha1
May 13 00:20:10.726024 kernel: ima: No architecture policies found
May 13 00:20:10.726031 kernel: clk: Disabling unused clocks
May 13 00:20:10.726038 kernel: Freeing unused kernel memory: 36480K
May 13 00:20:10.726044 kernel: Run /init as init process
May 13 00:20:10.726051 kernel: with arguments:
May 13 00:20:10.726060 kernel: /init
May 13 00:20:10.726067 kernel: with environment:
May 13 00:20:10.726073 kernel: HOME=/
May 13 00:20:10.726080 kernel: TERM=linux
May 13 00:20:10.726086 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:20:10.726096 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:20:10.726107 systemd[1]: Detected virtualization kvm.
May 13 00:20:10.726114 systemd[1]: Detected architecture arm64.
May 13 00:20:10.726121 systemd[1]: Running in initrd.
May 13 00:20:10.726128 systemd[1]: No hostname configured, using default hostname.
May 13 00:20:10.726135 systemd[1]: Hostname set to <localhost>.
May 13 00:20:10.726142 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:20:10.726151 systemd[1]: Queued start job for default target initrd.target.
May 13 00:20:10.726158 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:20:10.726165 systemd[1]: Reached target cryptsetup.target.
May 13 00:20:10.726172 systemd[1]: Reached target paths.target.
May 13 00:20:10.726179 systemd[1]: Reached target slices.target.
May 13 00:20:10.726186 systemd[1]: Reached target swap.target.
May 13 00:20:10.726193 systemd[1]: Reached target timers.target.
May 13 00:20:10.726200 systemd[1]: Listening on iscsid.socket.
May 13 00:20:10.726208 systemd[1]: Listening on iscsiuio.socket.
May 13 00:20:10.726215 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:20:10.726222 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:20:10.726229 systemd[1]: Listening on systemd-journald.socket.
May 13 00:20:10.726236 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:20:10.726243 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:20:10.726250 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:20:10.726257 systemd[1]: Reached target sockets.target.
May 13 00:20:10.726265 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:20:10.726272 systemd[1]: Finished network-cleanup.service.
May 13 00:20:10.726279 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:20:10.726286 systemd[1]: Starting systemd-journald.service...
May 13 00:20:10.726293 systemd[1]: Starting systemd-modules-load.service...
May 13 00:20:10.726300 systemd[1]: Starting systemd-resolved.service...
May 13 00:20:10.726307 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:20:10.726314 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:20:10.726322 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:20:10.726330 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:20:10.726337 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:20:10.726344 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:20:10.726354 systemd-journald[290]: Journal started
May 13 00:20:10.726438 systemd-journald[290]: Runtime Journal (/run/log/journal/020dcfe888384aa19a3138b6c405b614) is 6.0M, max 48.7M, 42.6M free.
May 13 00:20:10.715570 systemd-modules-load[291]: Inserted module 'overlay'
May 13 00:20:10.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.729379 systemd[1]: Started systemd-journald.service.
May 13 00:20:10.729400 kernel: audit: type=1130 audit(1747095610.728:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.729472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:20:10.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.736397 kernel: audit: type=1130 audit(1747095610.732:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.741395 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:20:10.741173 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:20:10.742741 systemd[1]: Starting dracut-cmdline.service...
May 13 00:20:10.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.747992 kernel: audit: type=1130 audit(1747095610.741:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.748877 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 13 00:20:10.750321 kernel: Bridge firewalling registered
May 13 00:20:10.751994 systemd-resolved[292]: Positive Trust Anchors:
May 13 00:20:10.752005 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:20:10.752032 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:20:10.762148 kernel: audit: type=1130 audit(1747095610.758:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.756137 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 13 00:20:10.763826 kernel: SCSI subsystem initialized
May 13 00:20:10.763839 dracut-cmdline[309]: dracut-dracut-053
May 13 00:20:10.763839 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:20:10.758188 systemd[1]: Started systemd-resolved.service.
May 13 00:20:10.758965 systemd[1]: Reached target nss-lookup.target.
May 13 00:20:10.772226 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:20:10.772248 kernel: device-mapper: uevent: version 1.0.3
May 13 00:20:10.772258 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:20:10.774463 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 13 00:20:10.775232 systemd[1]: Finished systemd-modules-load.service.
May 13 00:20:10.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.776669 systemd[1]: Starting systemd-sysctl.service...
May 13 00:20:10.779940 kernel: audit: type=1130 audit(1747095610.775:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.784620 systemd[1]: Finished systemd-sysctl.service.
May 13 00:20:10.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.788390 kernel: audit: type=1130 audit(1747095610.784:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.821389 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:20:10.833387 kernel: iscsi: registered transport (tcp)
May 13 00:20:10.848396 kernel: iscsi: registered transport (qla4xxx)
May 13 00:20:10.848408 kernel: QLogic iSCSI HBA Driver
May 13 00:20:10.881053 systemd[1]: Finished dracut-cmdline.service.
May 13 00:20:10.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.882502 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:20:10.885578 kernel: audit: type=1130 audit(1747095610.881:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:10.924403 kernel: raid6: neonx8 gen() 13658 MB/s
May 13 00:20:10.941390 kernel: raid6: neonx8 xor() 10735 MB/s
May 13 00:20:10.958388 kernel: raid6: neonx4 gen() 13441 MB/s
May 13 00:20:10.975389 kernel: raid6: neonx4 xor() 10899 MB/s
May 13 00:20:10.992392 kernel: raid6: neonx2 gen() 12877 MB/s
May 13 00:20:11.009394 kernel: raid6: neonx2 xor() 10153 MB/s
May 13 00:20:11.026391 kernel: raid6: neonx1 gen() 10498 MB/s
May 13 00:20:11.043387 kernel: raid6: neonx1 xor() 8698 MB/s
May 13 00:20:11.060389 kernel: raid6: int64x8 gen() 5540 MB/s
May 13 00:20:11.077388 kernel: raid6: int64x8 xor() 3520 MB/s
May 13 00:20:11.094392 kernel: raid6: int64x4 gen() 7189 MB/s
May 13 00:20:11.111390 kernel: raid6: int64x4 xor() 3829 MB/s
May 13 00:20:11.128392 kernel: raid6: int64x2 gen() 6111 MB/s
May 13 00:20:11.145401 kernel: raid6: int64x2 xor() 3292 MB/s
May 13 00:20:11.162388 kernel: raid6: int64x1 gen() 4979 MB/s
May 13 00:20:11.180818 kernel: raid6: int64x1 xor() 2616 MB/s
May 13 00:20:11.180870 kernel: raid6: using algorithm neonx8 gen() 13658 MB/s
May 13 00:20:11.180891 kernel: raid6: .... xor() 10735 MB/s, rmw enabled
May 13 00:20:11.180900 kernel: raid6: using neon recovery algorithm
May 13 00:20:11.196418 kernel: xor: measuring software checksum speed
May 13 00:20:11.197820 kernel: 8regs : 649 MB/sec
May 13 00:20:11.197833 kernel: 32regs : 20691 MB/sec
May 13 00:20:11.198467 kernel: arm64_neon : 27775 MB/sec
May 13 00:20:11.198477 kernel: xor: using function: arm64_neon (27775 MB/sec)
May 13 00:20:11.273388 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 13 00:20:11.290678 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:20:11.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:11.293000 audit: BPF prog-id=7 op=LOAD
May 13 00:20:11.295050 kernel: audit: type=1130 audit(1747095611.290:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:11.295073 kernel: audit: type=1334 audit(1747095611.293:10): prog-id=7 op=LOAD
May 13 00:20:11.294000 audit: BPF prog-id=8 op=LOAD
May 13 00:20:11.295615 systemd[1]: Starting systemd-udevd.service...
May 13 00:20:11.308118 systemd-udevd[491]: Using default interface naming scheme 'v252'.
May 13 00:20:11.311570 systemd[1]: Started systemd-udevd.service.
May 13 00:20:11.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:11.313590 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:20:11.326536 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
May 13 00:20:11.357017 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:20:11.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:11.358633 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:20:11.396194 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:20:11.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:11.431493 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:20:11.437853 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:20:11.437867 kernel: GPT:9289727 != 19775487
May 13 00:20:11.437876 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:20:11.437885 kernel: GPT:9289727 != 19775487
May 13 00:20:11.437893 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:20:11.437901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:11.454389 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (548)
May 13 00:20:11.458296 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:20:11.463397 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:20:11.464229 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:20:11.468595 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:20:11.472622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:20:11.475071 systemd[1]: Starting disk-uuid.service...
May 13 00:20:11.486698 disk-uuid[563]: Primary Header is updated.
May 13 00:20:11.486698 disk-uuid[563]: Secondary Entries is updated.
May 13 00:20:11.486698 disk-uuid[563]: Secondary Header is updated.
May 13 00:20:11.490392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:11.524392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:12.525416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:12.525470 disk-uuid[564]: The operation has completed successfully.
May 13 00:20:12.550750 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:20:12.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.550848 systemd[1]: Finished disk-uuid.service.
May 13 00:20:12.552264 systemd[1]: Starting verity-setup.service...
May 13 00:20:12.568244 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:20:12.591014 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:20:12.593022 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:20:12.594773 systemd[1]: Finished verity-setup.service.
May 13 00:20:12.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.641180 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:20:12.642238 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:20:12.641903 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:20:12.642626 systemd[1]: Starting ignition-setup.service...
May 13 00:20:12.644360 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:20:12.651547 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:20:12.651584 kernel: BTRFS info (device vda6): using free space tree
May 13 00:20:12.651594 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:20:12.660008 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:20:12.665581 systemd[1]: Finished ignition-setup.service.
May 13 00:20:12.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.667670 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:20:12.724069 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:20:12.725919 systemd[1]: Starting systemd-networkd.service...
May 13 00:20:12.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.725000 audit: BPF prog-id=9 op=LOAD
May 13 00:20:12.741576 ignition[654]: Ignition 2.14.0
May 13 00:20:12.742265 ignition[654]: Stage: fetch-offline
May 13 00:20:12.742850 ignition[654]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:12.743556 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:12.744520 ignition[654]: parsed url from cmdline: ""
May 13 00:20:12.744575 ignition[654]: no config URL provided
May 13 00:20:12.745156 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:20:12.746046 ignition[654]: no config at "/usr/lib/ignition/user.ign"
May 13 00:20:12.746766 ignition[654]: op(1): [started] loading QEMU firmware config module
May 13 00:20:12.747562 ignition[654]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:20:12.747706 systemd-networkd[739]: lo: Link UP
May 13 00:20:12.747717 systemd-networkd[739]: lo: Gained carrier
May 13 00:20:12.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.748181 systemd-networkd[739]: Enumeration completed
May 13 00:20:12.748265 systemd[1]: Started systemd-networkd.service.
May 13 00:20:12.749090 systemd[1]: Reached target network.target.
May 13 00:20:12.749425 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:20:12.750942 systemd[1]: Starting iscsiuio.service...
May 13 00:20:12.751344 systemd-networkd[739]: eth0: Link UP
May 13 00:20:12.751348 systemd-networkd[739]: eth0: Gained carrier
May 13 00:20:12.757339 ignition[654]: op(1): [finished] loading QEMU firmware config module
May 13 00:20:12.761900 systemd[1]: Started iscsiuio.service.
May 13 00:20:12.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.763452 systemd[1]: Starting iscsid.service...
May 13 00:20:12.767163 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:20:12.767163 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 13 00:20:12.767163 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:20:12.767163 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:20:12.767163 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:20:12.767163 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:20:12.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.770079 systemd[1]: Started iscsid.service.
May 13 00:20:12.773455 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:20:12.774207 systemd[1]: Starting dracut-initqueue.service...
May 13 00:20:12.784978 systemd[1]: Finished dracut-initqueue.service.
May 13 00:20:12.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.785952 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:20:12.787203 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:20:12.788653 systemd[1]: Reached target remote-fs.target.
May 13 00:20:12.790835 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:20:12.798240 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:20:12.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.813378 ignition[654]: parsing config with SHA512: c35a6fb230491597df9a6a081a19510081954dabd070d246891b3d7582aa0f774d4558c9e666164526a44c2a03bd7518f4e6b89d9947cb03a68dad45e3ee0350
May 13 00:20:12.821322 unknown[654]: fetched base config from "system"
May 13 00:20:12.821340 unknown[654]: fetched user config from "qemu"
May 13 00:20:12.822081 ignition[654]: fetch-offline: fetch-offline passed
May 13 00:20:12.823311 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:20:12.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.822149 ignition[654]: Ignition finished successfully
May 13 00:20:12.824532 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:20:12.825305 systemd[1]: Starting ignition-kargs.service...
May 13 00:20:12.834793 ignition[760]: Ignition 2.14.0
May 13 00:20:12.834809 ignition[760]: Stage: kargs
May 13 00:20:12.834899 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:12.834908 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:12.835866 ignition[760]: kargs: kargs passed
May 13 00:20:12.837388 systemd[1]: Finished ignition-kargs.service.
May 13 00:20:12.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.835905 ignition[760]: Ignition finished successfully
May 13 00:20:12.839023 systemd[1]: Starting ignition-disks.service...
May 13 00:20:12.845249 ignition[766]: Ignition 2.14.0
May 13 00:20:12.845259 ignition[766]: Stage: disks
May 13 00:20:12.845350 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:12.845360 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:12.847663 systemd[1]: Finished ignition-disks.service.
May 13 00:20:12.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.846236 ignition[766]: disks: disks passed
May 13 00:20:12.849002 systemd[1]: Reached target initrd-root-device.target.
May 13 00:20:12.846277 ignition[766]: Ignition finished successfully
May 13 00:20:12.850017 systemd[1]: Reached target local-fs-pre.target.
May 13 00:20:12.851035 systemd[1]: Reached target local-fs.target.
May 13 00:20:12.852118 systemd[1]: Reached target sysinit.target.
May 13 00:20:12.853110 systemd[1]: Reached target basic.target.
May 13 00:20:12.854971 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:20:12.865518 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 13 00:20:12.869653 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:20:12.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.871275 systemd[1]: Mounting sysroot.mount...
May 13 00:20:12.877387 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:20:12.877883 systemd[1]: Mounted sysroot.mount.
May 13 00:20:12.878630 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:20:12.880745 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:20:12.881639 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:20:12.881681 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:20:12.881708 systemd[1]: Reached target ignition-diskful.target.
May 13 00:20:12.883742 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:20:12.886562 systemd[1]: Starting initrd-setup-root.service...
May 13 00:20:12.890765 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:20:12.895426 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
May 13 00:20:12.899666 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:20:12.904303 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:20:12.930624 systemd[1]: Finished initrd-setup-root.service.
May 13 00:20:12.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.932118 systemd[1]: Starting ignition-mount.service...
May 13 00:20:12.933414 systemd[1]: Starting sysroot-boot.service...
May 13 00:20:12.938102 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:20:12.947354 ignition[827]: INFO : Ignition 2.14.0
May 13 00:20:12.947354 ignition[827]: INFO : Stage: mount
May 13 00:20:12.948588 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:20:12.948588 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:12.948588 ignition[827]: INFO : mount: mount passed
May 13 00:20:12.948588 ignition[827]: INFO : Ignition finished successfully
May 13 00:20:12.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:12.949281 systemd[1]: Finished ignition-mount.service.
May 13 00:20:12.954957 systemd[1]: Finished sysroot-boot.service.
May 13 00:20:12.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:13.603263 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:20:13.609388 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835)
May 13 00:20:13.612854 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:20:13.612902 kernel: BTRFS info (device vda6): using free space tree
May 13 00:20:13.612913 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:20:13.615670 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:20:13.617103 systemd[1]: Starting ignition-files.service...
May 13 00:20:13.630827 ignition[855]: INFO : Ignition 2.14.0
May 13 00:20:13.630827 ignition[855]: INFO : Stage: files
May 13 00:20:13.632147 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:20:13.632147 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:13.632147 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:20:13.637525 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:20:13.637525 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:20:13.642364 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:20:13.643505 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:20:13.644753 unknown[855]: wrote ssh authorized keys file for user: core
May 13 00:20:13.645753 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:20:13.645753 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:20:13.645753 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:20:13.645753 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:20:13.645753 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 00:20:14.240665 systemd-networkd[739]: eth0: Gained IPv6LL
May 13 00:20:14.316237 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:20:15.431377 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:20:15.433129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:20:15.433129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 00:20:15.691174 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 13 00:20:15.769235 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:20:15.770725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:20:15.783270 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:20:15.783270 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:20:15.783270 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:20:15.783270 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 00:20:16.033878 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 13 00:20:16.363447 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:20:16.363447 ignition[855]: INFO : files: op(d): [started] processing unit "containerd.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:20:16.365961 ignition[855]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:20:16.417351 ignition[855]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:20:16.418655 ignition[855]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:20:16.418655 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:20:16.418655 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:20:16.418655 ignition[855]: INFO : files: files passed
May 13 00:20:16.418655 ignition[855]: INFO : Ignition finished successfully
May 13 00:20:16.430358 kernel: kauditd_printk_skb: 22 callbacks suppressed
May 13 00:20:16.430404 kernel: audit: type=1130 audit(1747095616.422:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.421061 systemd[1]: Finished ignition-files.service.
May 13 00:20:16.423917 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:20:16.432710 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:20:16.429680 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:20:16.435319 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:20:16.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.430560 systemd[1]: Starting ignition-quench.service...
May 13 00:20:16.447005 kernel: audit: type=1130 audit(1747095616.435:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.447027 kernel: audit: type=1130 audit(1747095616.443:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.447036 kernel: audit: type=1131 audit(1747095616.443:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.434466 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:20:16.436248 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:20:16.436325 systemd[1]: Finished ignition-quench.service.
May 13 00:20:16.444323 systemd[1]: Reached target ignition-complete.target.
May 13 00:20:16.450689 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:20:16.464332 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:20:16.464456 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:20:16.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.465831 systemd[1]: Reached target initrd-fs.target.
May 13 00:20:16.471939 kernel: audit: type=1130 audit(1747095616.465:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.471969 kernel: audit: type=1131 audit(1747095616.465:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.471482 systemd[1]: Reached target initrd.target.
May 13 00:20:16.472515 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 00:20:16.473284 systemd[1]: Starting dracut-pre-pivot.service...
May 13 00:20:16.483715 systemd[1]: Finished dracut-pre-pivot.service.
May 13 00:20:16.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.487378 kernel: audit: type=1130 audit(1747095616.483:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.485167 systemd[1]: Starting initrd-cleanup.service...
May 13 00:20:16.493537 systemd[1]: Stopped target nss-lookup.target.
May 13 00:20:16.494201 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 00:20:16.495316 systemd[1]: Stopped target timers.target.
May 13 00:20:16.496318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:20:16.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.496456 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 00:20:16.501313 kernel: audit: type=1131 audit(1747095616.497:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.497509 systemd[1]: Stopped target initrd.target.
May 13 00:20:16.500907 systemd[1]: Stopped target basic.target.
May 13 00:20:16.501966 systemd[1]: Stopped target ignition-complete.target.
May 13 00:20:16.503086 systemd[1]: Stopped target ignition-diskful.target.
May 13 00:20:16.504159 systemd[1]: Stopped target initrd-root-device.target.
May 13 00:20:16.505283 systemd[1]: Stopped target remote-fs.target.
May 13 00:20:16.506296 systemd[1]: Stopped target remote-fs-pre.target.
May 13 00:20:16.507437 systemd[1]: Stopped target sysinit.target.
May 13 00:20:16.508538 systemd[1]: Stopped target local-fs.target.
May 13 00:20:16.509549 systemd[1]: Stopped target local-fs-pre.target.
May 13 00:20:16.510557 systemd[1]: Stopped target swap.target.
May 13 00:20:16.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.515420 kernel: audit: type=1131 audit(1747095616.512:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.511509 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:20:16.511629 systemd[1]: Stopped dracut-pre-mount.service.
May 13 00:20:16.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.512638 systemd[1]: Stopped target cryptsetup.target.
May 13 00:20:16.521119 kernel: audit: type=1131 audit(1747095616.516:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.515984 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:20:16.516092 systemd[1]: Stopped dracut-initqueue.service.
May 13 00:20:16.517267 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:20:16.517363 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 00:20:16.520801 systemd[1]: Stopped target paths.target.
May 13 00:20:16.521669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:20:16.523447 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 00:20:16.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.524740 systemd[1]: Stopped target slices.target.
May 13 00:20:16.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.525982 systemd[1]: Stopped target sockets.target.
May 13 00:20:16.527004 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:20:16.533146 iscsid[745]: iscsid shutting down.
May 13 00:20:16.527110 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 00:20:16.528325 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:20:16.528438 systemd[1]: Stopped ignition-files.service.
May 13 00:20:16.530253 systemd[1]: Stopping ignition-mount.service...
May 13 00:20:16.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.531137 systemd[1]: Stopping iscsid.service...
May 13 00:20:16.535870 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:20:16.540144 ignition[895]: INFO : Ignition 2.14.0
May 13 00:20:16.540144 ignition[895]: INFO : Stage: umount
May 13 00:20:16.540144 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:20:16.540144 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:16.540144 ignition[895]: INFO : umount: umount passed
May 13 00:20:16.540144 ignition[895]: INFO : Ignition finished successfully
May 13 00:20:16.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.536036 systemd[1]: Stopped kmod-static-nodes.service.
May 13 00:20:16.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.537821 systemd[1]: Stopping sysroot-boot.service...
May 13 00:20:16.540459 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:20:16.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.540617 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 00:20:16.542495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:20:16.542605 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 00:20:16.545910 systemd[1]: iscsid.service: Deactivated successfully.
May 13 00:20:16.546018 systemd[1]: Stopped iscsid.service.
May 13 00:20:16.547638 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:20:16.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.547737 systemd[1]: Stopped ignition-mount.service.
May 13 00:20:16.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.551321 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:20:16.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.551962 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:20:16.552033 systemd[1]: Closed iscsid.socket.
May 13 00:20:16.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.553210 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:20:16.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.553253 systemd[1]: Stopped ignition-disks.service.
May 13 00:20:16.554321 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:20:16.554360 systemd[1]: Stopped ignition-kargs.service.
May 13 00:20:16.556119 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:20:16.556162 systemd[1]: Stopped ignition-setup.service.
May 13 00:20:16.557185 systemd[1]: Stopping iscsiuio.service...
May 13 00:20:16.559750 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:20:16.559856 systemd[1]: Finished initrd-cleanup.service.
May 13 00:20:16.560849 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 00:20:16.560930 systemd[1]: Stopped iscsiuio.service.
May 13 00:20:16.562412 systemd[1]: Stopped target network.target.
May 13 00:20:16.563726 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:20:16.563759 systemd[1]: Closed iscsiuio.socket.
May 13 00:20:16.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.565019 systemd[1]: Stopping systemd-networkd.service...
May 13 00:20:16.566084 systemd[1]: Stopping systemd-resolved.service...
May 13 00:20:16.571424 systemd-networkd[739]: eth0: DHCPv6 lease lost
May 13 00:20:16.576000 audit: BPF prog-id=9 op=UNLOAD
May 13 00:20:16.572513 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:20:16.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.572602 systemd[1]: Stopped systemd-networkd.service.
May 13 00:20:16.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.574028 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:20:16.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.574067 systemd[1]: Closed systemd-networkd.socket.
May 13 00:20:16.575672 systemd[1]: Stopping network-cleanup.service...
May 13 00:20:16.576747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:20:16.576800 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 00:20:16.578667 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:20:16.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.578714 systemd[1]: Stopped systemd-sysctl.service.
May 13 00:20:16.579900 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:20:16.579938 systemd[1]: Stopped systemd-modules-load.service.
May 13 00:20:16.591000 audit: BPF prog-id=6 op=UNLOAD
May 13 00:20:16.580908 systemd[1]: Stopping systemd-udevd.service...
May 13 00:20:16.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.586700 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 00:20:16.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.587147 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:20:16.587247 systemd[1]: Stopped systemd-resolved.service.
May 13 00:20:16.591141 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:20:16.591234 systemd[1]: Stopped network-cleanup.service.
May 13 00:20:16.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.593317 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:20:16.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.593450 systemd[1]: Stopped systemd-udevd.service.
May 13 00:20:16.595022 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:20:16.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.595056 systemd[1]: Closed systemd-udevd-control.socket.
May 13 00:20:16.597146 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:20:16.597182 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 00:20:16.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.598507 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:20:16.598555 systemd[1]: Stopped dracut-pre-udev.service.
May 13 00:20:16.601951 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:20:16.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.601999 systemd[1]: Stopped dracut-cmdline.service.
May 13 00:20:16.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.603064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:20:16.603098 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 00:20:16.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:16.605907 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 00:20:16.607257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:20:16.607311 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 00:20:16.611119 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:20:16.611202 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 00:20:16.612819 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:20:16.612900 systemd[1]: Stopped sysroot-boot.service.
May 13 00:20:16.614162 systemd[1]: Reached target initrd-switch-root.target.
May 13 00:20:16.623000 audit: BPF prog-id=5 op=UNLOAD
May 13 00:20:16.623000 audit: BPF prog-id=4 op=UNLOAD
May 13 00:20:16.623000 audit: BPF prog-id=3 op=UNLOAD
May 13 00:20:16.615200 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:20:16.623000 audit: BPF prog-id=8 op=UNLOAD
May 13 00:20:16.623000 audit: BPF prog-id=7 op=UNLOAD
May 13 00:20:16.615246 systemd[1]: Stopped initrd-setup-root.service.
May 13 00:20:16.617243 systemd[1]: Starting initrd-switch-root.service...
May 13 00:20:16.622543 systemd[1]: Switching root.
May 13 00:20:16.639878 systemd-journald[290]: Journal stopped
May 13 00:20:18.644696 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
May 13 00:20:18.644753 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 00:20:18.644765 kernel: SELinux: Class anon_inode not defined in policy.
May 13 00:20:18.644775 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 00:20:18.644791 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:20:18.644800 kernel: SELinux: policy capability open_perms=1
May 13 00:20:18.644810 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:20:18.644820 kernel: SELinux: policy capability always_check_network=0
May 13 00:20:18.644830 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:20:18.644840 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:20:18.644853 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:20:18.644867 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:20:18.644879 systemd[1]: Successfully loaded SELinux policy in 36.793ms.
May 13 00:20:18.644895 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.711ms.
May 13 00:20:18.645086 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:20:18.645105 systemd[1]: Detected virtualization kvm.
May 13 00:20:18.645116 systemd[1]: Detected architecture arm64.
May 13 00:20:18.645127 systemd[1]: Detected first boot.
May 13 00:20:18.645147 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:20:18.645159 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 00:20:18.645176 systemd[1]: Populated /etc with preset unit settings.
May 13 00:20:18.645187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:20:18.645198 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:20:18.645210 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:20:18.645222 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:20:18.645233 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 13 00:20:18.645244 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 00:20:18.645255 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 00:20:18.645265 systemd[1]: Created slice system-getty.slice.
May 13 00:20:18.645275 systemd[1]: Created slice system-modprobe.slice.
May 13 00:20:18.645286 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 00:20:18.645297 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 00:20:18.645308 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 00:20:18.645318 systemd[1]: Created slice user.slice.
May 13 00:20:18.645329 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:20:18.645342 systemd[1]: Started systemd-ask-password-wall.path.
May 13 00:20:18.645352 systemd[1]: Set up automount boot.automount.
May 13 00:20:18.645363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 00:20:18.645391 systemd[1]: Reached target integritysetup.target.
May 13 00:20:18.645404 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:20:18.645415 systemd[1]: Reached target remote-fs.target.
May 13 00:20:18.645425 systemd[1]: Reached target slices.target.
May 13 00:20:18.645436 systemd[1]: Reached target swap.target.
May 13 00:20:18.645449 systemd[1]: Reached target torcx.target.
May 13 00:20:18.645460 systemd[1]: Reached target veritysetup.target.
May 13 00:20:18.645471 systemd[1]: Listening on systemd-coredump.socket.
May 13 00:20:18.645482 systemd[1]: Listening on systemd-initctl.socket.
May 13 00:20:18.645493 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:20:18.645503 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:20:18.645513 systemd[1]: Listening on systemd-journald.socket.
May 13 00:20:18.645524 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:20:18.645534 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:20:18.645544 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:20:18.645555 systemd[1]: Listening on systemd-userdbd.socket.
May 13 00:20:18.645565 systemd[1]: Mounting dev-hugepages.mount...
May 13 00:20:18.645577 systemd[1]: Mounting dev-mqueue.mount...
May 13 00:20:18.645588 systemd[1]: Mounting media.mount...
May 13 00:20:18.645598 systemd[1]: Mounting sys-kernel-debug.mount...
May 13 00:20:18.645612 systemd[1]: Mounting sys-kernel-tracing.mount...
May 13 00:20:18.645622 systemd[1]: Mounting tmp.mount...
May 13 00:20:18.645632 systemd[1]: Starting flatcar-tmpfiles.service...
May 13 00:20:18.645643 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:20:18.645655 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:20:18.645665 systemd[1]: Starting modprobe@configfs.service...
May 13 00:20:18.645675 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:20:18.645685 systemd[1]: Starting modprobe@drm.service...
May 13 00:20:18.645695 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:20:18.645705 systemd[1]: Starting modprobe@fuse.service...
May 13 00:20:18.645715 systemd[1]: Starting modprobe@loop.service...
May 13 00:20:18.645725 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:20:18.645736 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 13 00:20:18.645748 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 13 00:20:18.645758 systemd[1]: Starting systemd-journald.service...
May 13 00:20:18.645768 kernel: fuse: init (API version 7.34)
May 13 00:20:18.645778 systemd[1]: Starting systemd-modules-load.service...
May 13 00:20:18.645788 kernel: loop: module loaded
May 13 00:20:18.645798 systemd[1]: Starting systemd-network-generator.service...
May 13 00:20:18.645809 systemd[1]: Starting systemd-remount-fs.service...
May 13 00:20:18.645819 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:20:18.645830 systemd[1]: Mounted dev-hugepages.mount.
May 13 00:20:18.645842 systemd[1]: Mounted dev-mqueue.mount.
May 13 00:20:18.645851 systemd[1]: Mounted media.mount.
May 13 00:20:18.645863 systemd[1]: Mounted sys-kernel-debug.mount.
May 13 00:20:18.645873 systemd[1]: Mounted sys-kernel-tracing.mount.
May 13 00:20:18.645884 systemd[1]: Mounted tmp.mount.
May 13 00:20:18.645894 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:20:18.645904 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:20:18.645916 systemd-journald[1027]: Journal started
May 13 00:20:18.645962 systemd-journald[1027]: Runtime Journal (/run/log/journal/020dcfe888384aa19a3138b6c405b614) is 6.0M, max 48.7M, 42.6M free.
May 13 00:20:18.553000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 00:20:18.553000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 13 00:20:18.643000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 13 00:20:18.643000 audit[1027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc9038320 a2=4000 a3=1 items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:20:18.643000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 13 00:20:18.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.646955 systemd[1]: Finished modprobe@configfs.service.
May 13 00:20:18.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.649394 systemd[1]: Started systemd-journald.service.
May 13 00:20:18.650048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:20:18.650216 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:20:18.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.651138 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:20:18.651336 systemd[1]: Finished modprobe@drm.service.
May 13 00:20:18.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.652402 systemd[1]: Finished flatcar-tmpfiles.service.
May 13 00:20:18.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.653247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:20:18.653491 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:20:18.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.654803 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:20:18.654992 systemd[1]: Finished modprobe@fuse.service.
May 13 00:20:18.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.655850 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:20:18.656046 systemd[1]: Finished modprobe@loop.service.
May 13 00:20:18.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.657425 systemd[1]: Finished systemd-modules-load.service.
May 13 00:20:18.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.658590 systemd[1]: Finished systemd-network-generator.service.
May 13 00:20:18.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.659734 systemd[1]: Finished systemd-remount-fs.service.
May 13 00:20:18.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.660895 systemd[1]: Reached target network-pre.target.
May 13 00:20:18.662906 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 13 00:20:18.664598 systemd[1]: Mounting sys-kernel-config.mount...
May 13 00:20:18.665183 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:20:18.667229 systemd[1]: Starting systemd-hwdb-update.service...
May 13 00:20:18.669299 systemd[1]: Starting systemd-journal-flush.service...
May 13 00:20:18.670128 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:20:18.671349 systemd[1]: Starting systemd-random-seed.service...
May 13 00:20:18.672186 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:20:18.673264 systemd[1]: Starting systemd-sysctl.service...
May 13 00:20:18.675261 systemd[1]: Starting systemd-sysusers.service...
May 13 00:20:18.678422 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 13 00:20:18.679103 systemd-journald[1027]: Time spent on flushing to /var/log/journal/020dcfe888384aa19a3138b6c405b614 is 11.501ms for 939 entries.
May 13 00:20:18.679103 systemd-journald[1027]: System Journal (/var/log/journal/020dcfe888384aa19a3138b6c405b614) is 8.0M, max 195.6M, 187.6M free.
May 13 00:20:18.699491 systemd-journald[1027]: Received client request to flush runtime journal.
May 13 00:20:18.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.681598 systemd[1]: Mounted sys-kernel-config.mount.
May 13 00:20:18.684912 systemd[1]: Finished systemd-random-seed.service.
May 13 00:20:18.685861 systemd[1]: Reached target first-boot-complete.target.
May 13 00:20:18.695051 systemd[1]: Finished systemd-sysctl.service.
May 13 00:20:18.696172 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:20:18.698083 systemd[1]: Starting systemd-udev-settle.service...
May 13 00:20:18.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.702548 systemd[1]: Finished systemd-journal-flush.service.
May 13 00:20:18.705284 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 00:20:18.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:18.711228 systemd[1]: Finished systemd-sysusers.service.
May 13 00:20:18.713148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:20:18.727497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:20:18.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.039979 systemd[1]: Finished systemd-hwdb-update.service.
May 13 00:20:19.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.041929 systemd[1]: Starting systemd-udevd.service...
May 13 00:20:19.062672 systemd-udevd[1090]: Using default interface naming scheme 'v252'.
May 13 00:20:19.075328 systemd[1]: Started systemd-udevd.service.
May 13 00:20:19.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.077330 systemd[1]: Starting systemd-networkd.service...
May 13 00:20:19.099292 systemd[1]: Starting systemd-userdbd.service...
May 13 00:20:19.118630 systemd[1]: Found device dev-ttyAMA0.device.
May 13 00:20:19.152900 systemd[1]: Started systemd-userdbd.service.
May 13 00:20:19.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.159264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:20:19.193772 systemd[1]: Finished systemd-udev-settle.service.
May 13 00:20:19.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.195759 systemd[1]: Starting lvm2-activation-early.service...
May 13 00:20:19.203467 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:20:19.205273 systemd-networkd[1091]: lo: Link UP
May 13 00:20:19.205283 systemd-networkd[1091]: lo: Gained carrier
May 13 00:20:19.205631 systemd-networkd[1091]: Enumeration completed
May 13 00:20:19.205730 systemd-networkd[1091]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:20:19.205739 systemd[1]: Started systemd-networkd.service.
May 13 00:20:19.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.206855 systemd-networkd[1091]: eth0: Link UP
May 13 00:20:19.206864 systemd-networkd[1091]: eth0: Gained carrier
May 13 00:20:19.231487 systemd-networkd[1091]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:20:19.233176 systemd[1]: Finished lvm2-activation-early.service.
May 13 00:20:19.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.234003 systemd[1]: Reached target cryptsetup.target.
May 13 00:20:19.235728 systemd[1]: Starting lvm2-activation.service...
May 13 00:20:19.239204 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:20:19.265184 systemd[1]: Finished lvm2-activation.service.
May 13 00:20:19.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.265945 systemd[1]: Reached target local-fs-pre.target.
May 13 00:20:19.266616 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:20:19.266642 systemd[1]: Reached target local-fs.target.
May 13 00:20:19.267212 systemd[1]: Reached target machines.target.
May 13 00:20:19.268932 systemd[1]: Starting ldconfig.service...
May 13 00:20:19.269760 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:20:19.269808 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:20:19.270895 systemd[1]: Starting systemd-boot-update.service...
May 13 00:20:19.272617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 13 00:20:19.274637 systemd[1]: Starting systemd-machine-id-commit.service...
May 13 00:20:19.276676 systemd[1]: Starting systemd-sysext.service...
May 13 00:20:19.279936 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl)
May 13 00:20:19.280989 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 13 00:20:19.290815 systemd[1]: Unmounting usr-share-oem.mount...
May 13 00:20:19.292013 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 13 00:20:19.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.297876 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 13 00:20:19.299088 systemd[1]: Unmounted usr-share-oem.mount.
May 13 00:20:19.339559 systemd[1]: Finished systemd-machine-id-commit.service.
May 13 00:20:19.340394 kernel: loop0: detected capacity change from 0 to 194096
May 13 00:20:19.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.352390 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:20:19.364524 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31)
May 13 00:20:19.364524 systemd-fsck[1139]: /dev/vda1: 236 files, 117310/258078 clusters
May 13 00:20:19.366614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 13 00:20:19.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.379388 kernel: loop1: detected capacity change from 0 to 194096
May 13 00:20:19.383806 (sd-sysext)[1147]: Using extensions 'kubernetes'.
May 13 00:20:19.384108 (sd-sysext)[1147]: Merged extensions into '/usr'.
May 13 00:20:19.399133 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:20:19.400252 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:20:19.401922 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:20:19.403550 systemd[1]: Starting modprobe@loop.service...
May 13 00:20:19.404184 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:20:19.404309 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:20:19.405069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:20:19.405200 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:20:19.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.406362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:20:19.406515 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:20:19.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.407681 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:20:19.407822 systemd[1]: Finished modprobe@loop.service.
May 13 00:20:19.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.409266 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:20:19.409576 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:20:19.451804 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:20:19.455888 systemd[1]: Finished ldconfig.service.
May 13 00:20:19.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.634873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:20:19.636499 systemd[1]: Mounting boot.mount...
May 13 00:20:19.638133 systemd[1]: Mounting usr-share-oem.mount...
May 13 00:20:19.644267 systemd[1]: Mounted boot.mount.
May 13 00:20:19.645058 systemd[1]: Mounted usr-share-oem.mount.
May 13 00:20:19.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.647058 systemd[1]: Finished systemd-sysext.service.
May 13 00:20:19.649408 systemd[1]: Starting ensure-sysext.service...
May 13 00:20:19.650991 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 13 00:20:19.652108 systemd[1]: Finished systemd-boot-update.service.
May 13 00:20:19.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.656015 systemd[1]: Reloading.
May 13 00:20:19.659988 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 13 00:20:19.661015 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:20:19.662258 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:20:19.688737 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-05-13T00:20:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:20:19.689020 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-05-13T00:20:19Z" level=info msg="torcx already run"
May 13 00:20:19.748658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:20:19.748678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:20:19.763659 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:20:19.808355 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 13 00:20:19.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.811796 systemd[1]: Starting audit-rules.service...
May 13 00:20:19.813352 systemd[1]: Starting clean-ca-certificates.service...
May 13 00:20:19.815100 systemd[1]: Starting systemd-journal-catalog-update.service...
May 13 00:20:19.817426 systemd[1]: Starting systemd-resolved.service...
May 13 00:20:19.819521 systemd[1]: Starting systemd-timesyncd.service...
May 13 00:20:19.821196 systemd[1]: Starting systemd-update-utmp.service...
May 13 00:20:19.822710 systemd[1]: Finished clean-ca-certificates.service.
May 13 00:20:19.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:20:19.827935 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:20:19.829200 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:20:19.830878 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:20:19.832000 audit[1238]: SYSTEM_BOOT pid=1238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:20:19.833748 systemd[1]: Starting modprobe@loop.service... May 13 00:20:19.834338 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:20:19.834482 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:20:19.834574 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:20:19.835441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:19.835591 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:20:19.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.836681 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:19.836818 systemd[1]: Finished modprobe@loop.service. May 13 00:20:19.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.839332 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:19.841072 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:20:19.842260 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:20:19.844223 systemd[1]: Starting modprobe@loop.service... May 13 00:20:19.844966 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:20:19.845136 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:20:19.845272 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:20:19.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:20:19.846216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:19.846544 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:20:19.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.847626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:19.847759 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:20:19.851351 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:20:19.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.852585 systemd[1]: Finished systemd-update-utmp.service. May 13 00:20:19.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.853593 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:19.853746 systemd[1]: Finished modprobe@loop.service. May 13 00:20:19.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.855484 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:20:19.856614 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:20:19.858563 systemd[1]: Starting modprobe@drm.service... May 13 00:20:19.860144 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:20:19.861032 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:20:19.861180 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:20:19.862470 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:20:19.864590 systemd[1]: Starting systemd-update-done.service... May 13 00:20:19.865256 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:20:19.866392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:19.866536 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:20:19.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:20:19.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.867513 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:20:19.867644 systemd[1]: Finished modprobe@drm.service. May 13 00:20:19.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.868616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:19.868747 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:20:19.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.869953 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:19.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.870059 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:20:19.871353 systemd[1]: Finished ensure-sysext.service. May 13 00:20:19.876362 systemd[1]: Finished systemd-update-done.service. May 13 00:20:19.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:20:19.889000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:20:19.889000 audit[1277]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc822e580 a2=420 a3=0 items=0 ppid=1231 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:20:19.889000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:20:19.890174 augenrules[1277]: No rules May 13 00:20:19.891024 systemd[1]: Finished audit-rules.service. May 13 00:20:19.905120 systemd[1]: Started systemd-timesyncd.service. May 13 00:20:19.905507 systemd-resolved[1236]: Positive Trust Anchors: May 13 00:20:19.905729 systemd-resolved[1236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:20:19.905815 systemd-timesyncd[1237]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
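augenrules reports "No rules" above, so auditctl loads an empty rule set (the PROCTITLE hex in the audit record decodes to "/sbin/auditctl -R /etc/audit/audit.rules"). augenrules builds that file by merging fragments from /etc/audit/rules.d/; a hypothetical fragment, shown only to illustrate the format, since this host ships none:

# /etc/audit/rules.d/10-identity.rules (illustrative)
-D                                  # flush any previously loaded rules
-b 8192                             # kernel audit backlog buffer size
-w /etc/passwd -p wa -k identity    # watch writes/attribute changes, tag hits "identity"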
May 13 00:20:19.905817 systemd-resolved[1236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:20:19.905869 systemd-timesyncd[1237]: Initial clock synchronization to Tue 2025-05-13 00:20:19.835882 UTC. May 13 00:20:19.906142 systemd[1]: Reached target time-set.target. May 13 00:20:19.916037 systemd-resolved[1236]: Defaulting to hostname 'linux'. May 13 00:20:19.919239 systemd[1]: Started systemd-resolved.service. May 13 00:20:19.919913 systemd[1]: Reached target network.target. May 13 00:20:19.920480 systemd[1]: Reached target nss-lookup.target. May 13 00:20:19.921041 systemd[1]: Reached target sysinit.target. May 13 00:20:19.921678 systemd[1]: Started motdgen.path. May 13 00:20:19.922203 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:20:19.923192 systemd[1]: Started logrotate.timer. May 13 00:20:19.923856 systemd[1]: Started mdadm.timer. May 13 00:20:19.924347 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:20:19.924963 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:20:19.924994 systemd[1]: Reached target paths.target. May 13 00:20:19.925529 systemd[1]: Reached target timers.target. May 13 00:20:19.926344 systemd[1]: Listening on dbus.socket. May 13 00:20:19.927975 systemd[1]: Starting docker.socket... May 13 00:20:19.929526 systemd[1]: Listening on sshd.socket. May 13 00:20:19.930154 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:20:19.930473 systemd[1]: Listening on docker.socket. May 13 00:20:19.931053 systemd[1]: Reached target sockets.target. May 13 00:20:19.931647 systemd[1]: Reached target basic.target. May 13 00:20:19.932302 systemd[1]: System is tainted: cgroupsv1 May 13 00:20:19.932346 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:20:19.932387 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:20:19.933311 systemd[1]: Starting containerd.service... May 13 00:20:19.934938 systemd[1]: Starting dbus.service... May 13 00:20:19.936432 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:20:19.938048 systemd[1]: Starting extend-filesystems.service... May 13 00:20:19.938706 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:20:19.939890 systemd[1]: Starting motdgen.service... May 13 00:20:19.941568 systemd[1]: Starting prepare-helm.service... May 13 00:20:19.943278 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:20:19.945448 systemd[1]: Starting sshd-keygen.service... May 13 00:20:19.948031 jq[1289]: false May 13 00:20:19.947889 systemd[1]: Starting systemd-logind.service... 
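The generator warning during the earlier reload pointed at docker.socket still using the legacy /var/run path. The fix it asks for is a one-line change; done as a drop-in it needs an empty assignment first, because ListenStream= is a list setting that drop-ins would otherwise append to:

# /etc/systemd/system/docker.socket.d/10-run-path.conf (hypothetical drop-in)
[Socket]
ListenStream=                   # clear the inherited /var/run/docker.sock entry
ListenStream=/run/docker.sock   # the path systemd rewrote it to at runtime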
May 13 00:20:19.952182 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:20:19.952255 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:20:19.953433 systemd[1]: Starting update-engine.service... May 13 00:20:19.955089 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:20:19.957221 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:20:19.958714 jq[1310]: true May 13 00:20:19.957487 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:20:19.958442 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:20:19.958682 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:20:19.969200 jq[1315]: true May 13 00:20:19.978446 tar[1312]: linux-arm64/helm May 13 00:20:19.983760 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:20:19.984013 systemd[1]: Finished motdgen.service. May 13 00:20:19.991780 extend-filesystems[1290]: Found loop1 May 13 00:20:19.991780 extend-filesystems[1290]: Found vda May 13 00:20:19.991780 extend-filesystems[1290]: Found vda1 May 13 00:20:19.991780 extend-filesystems[1290]: Found vda2 May 13 00:20:19.991780 extend-filesystems[1290]: Found vda3 May 13 00:20:19.991780 extend-filesystems[1290]: Found usr May 13 00:20:19.991780 extend-filesystems[1290]: Found vda4 May 13 00:20:19.991780 extend-filesystems[1290]: Found vda6 May 13 00:20:19.991780 extend-filesystems[1290]: Found vda7 May 13 00:20:19.991780 extend-filesystems[1290]: Found vda9 May 13 00:20:19.991780 extend-filesystems[1290]: Checking size of /dev/vda9 May 13 00:20:19.999017 dbus-daemon[1288]: [system] SELinux support is enabled May 13 00:20:19.999221 systemd[1]: Started dbus.service. May 13 00:20:20.001670 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:20:20.001702 systemd[1]: Reached target system-config.target. May 13 00:20:20.002434 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:20:20.002461 systemd[1]: Reached target user-config.target. May 13 00:20:20.017760 extend-filesystems[1290]: Resized partition /dev/vda9 May 13 00:20:20.025382 extend-filesystems[1345]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:20:20.033383 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:20:20.039308 systemd-logind[1299]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:20:20.039568 systemd-logind[1299]: New seat seat0. May 13 00:20:20.046174 systemd[1]: Started systemd-logind.service. May 13 00:20:20.060401 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:20:20.075074 extend-filesystems[1345]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:20:20.075074 extend-filesystems[1345]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:20:20.075074 extend-filesystems[1345]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
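extend-filesystems is enumerating the disk and growing /dev/vda9 online; resize2fs can do this while the ext4 filesystem is mounted, as the log confirms below. A rough manual equivalent, assuming the cloud-utils growpart tool is available (it is not shown in this log):

# grow partition 9 of /dev/vda into free space, then grow the filesystem in place
growpart /dev/vda 9
resize2fs /dev/vda9   # online grow; matches the on-line resize logged here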
May 13 00:20:20.079480 extend-filesystems[1290]: Resized filesystem in /dev/vda9 May 13 00:20:20.080289 bash[1344]: Updated "/home/core/.ssh/authorized_keys" May 13 00:20:20.077045 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:20:20.077291 systemd[1]: Finished extend-filesystems.service. May 13 00:20:20.080524 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:20:20.091483 update_engine[1305]: I0513 00:20:20.091176 1305 main.cc:92] Flatcar Update Engine starting May 13 00:20:20.094559 update_engine[1305]: I0513 00:20:20.094529 1305 update_check_scheduler.cc:74] Next update check in 5m19s May 13 00:20:20.097131 systemd[1]: Started update-engine.service. May 13 00:20:20.099546 systemd[1]: Started locksmithd.service. May 13 00:20:20.106755 env[1314]: time="2025-05-13T00:20:20.106700344Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:20:20.130891 env[1314]: time="2025-05-13T00:20:20.130833275Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:20:20.131012 env[1314]: time="2025-05-13T00:20:20.130996199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132227275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132258638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132516923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132534647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132547335Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132556891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132624377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132821480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132953676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:20.135159 env[1314]: time="2025-05-13T00:20:20.132969179Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 13 00:20:20.135675 env[1314]: time="2025-05-13T00:20:20.133015134Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:20:20.135675 env[1314]: time="2025-05-13T00:20:20.133026157Z" level=info msg="metadata content store policy set" policy=shared May 13 00:20:20.142184 env[1314]: time="2025-05-13T00:20:20.142113664Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:20:20.142184 env[1314]: time="2025-05-13T00:20:20.142149826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:20:20.142184 env[1314]: time="2025-05-13T00:20:20.142163029Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:20:20.142282 env[1314]: time="2025-05-13T00:20:20.142192014Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142282 env[1314]: time="2025-05-13T00:20:20.142206288Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142282 env[1314]: time="2025-05-13T00:20:20.142221276Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142282 env[1314]: time="2025-05-13T00:20:20.142235194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142611 env[1314]: time="2025-05-13T00:20:20.142587094Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142611 env[1314]: time="2025-05-13T00:20:20.142609615Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142672 env[1314]: time="2025-05-13T00:20:20.142623771Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142672 env[1314]: time="2025-05-13T00:20:20.142635864Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:20:20.142672 env[1314]: time="2025-05-13T00:20:20.142647759Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:20:20.142765 env[1314]: time="2025-05-13T00:20:20.142745697Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:20:20.142838 env[1314]: time="2025-05-13T00:20:20.142820042Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:20:20.143143 env[1314]: time="2025-05-13T00:20:20.143114250Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:20:20.143183 env[1314]: time="2025-05-13T00:20:20.143153266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143183 env[1314]: time="2025-05-13T00:20:20.143167461Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:20:20.143430 env[1314]: time="2025-05-13T00:20:20.143362820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 13 00:20:20.143430 env[1314]: time="2025-05-13T00:20:20.143389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143430 env[1314]: time="2025-05-13T00:20:20.143402035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143430 env[1314]: time="2025-05-13T00:20:20.143413494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143430 env[1314]: time="2025-05-13T00:20:20.143426142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143537 env[1314]: time="2025-05-13T00:20:20.143438236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143537 env[1314]: time="2025-05-13T00:20:20.143449378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143537 env[1314]: time="2025-05-13T00:20:20.143460401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143537 env[1314]: time="2025-05-13T00:20:20.143472930Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:20:20.143623 env[1314]: time="2025-05-13T00:20:20.143584309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143623 env[1314]: time="2025-05-13T00:20:20.143599138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143623 env[1314]: time="2025-05-13T00:20:20.143612342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:20:20.143681 env[1314]: time="2025-05-13T00:20:20.143623881Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:20:20.143681 env[1314]: time="2025-05-13T00:20:20.143637124Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:20:20.143681 env[1314]: time="2025-05-13T00:20:20.143647592Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:20:20.143681 env[1314]: time="2025-05-13T00:20:20.143664047Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:20:20.143758 env[1314]: time="2025-05-13T00:20:20.143695490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:20:20.143944 env[1314]: time="2025-05-13T00:20:20.143889342Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:20:20.144766 env[1314]: time="2025-05-13T00:20:20.143952625Z" level=info msg="Connect containerd service" May 13 00:20:20.144766 env[1314]: time="2025-05-13T00:20:20.143980658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:20:20.144766 env[1314]: time="2025-05-13T00:20:20.144682118Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.144922917Z" level=info msg="Start subscribing containerd event" May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.144981085Z" level=info msg="Start recovering state" May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.145039332Z" level=info msg="Start event monitor" May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.145060664Z" level=info msg="Start snapshots syncer" May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.145062726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.145111337Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.145070973Z" level=info msg="Start cni network conf syncer for default" May 13 00:20:20.146812 env[1314]: time="2025-05-13T00:20:20.145140639Z" level=info msg="Start streaming server" May 13 00:20:20.145246 systemd[1]: Started containerd.service. May 13 00:20:20.147420 env[1314]: time="2025-05-13T00:20:20.147382094Z" level=info msg="containerd successfully booted in 0.041383s" May 13 00:20:20.165696 locksmithd[1352]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:20:20.372744 tar[1312]: linux-arm64/LICENSE May 13 00:20:20.372872 tar[1312]: linux-arm64/README.md May 13 00:20:20.376785 systemd[1]: Finished prepare-helm.service. May 13 00:20:20.576546 systemd-networkd[1091]: eth0: Gained IPv6LL May 13 00:20:20.578230 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:20:20.579296 systemd[1]: Reached target network-online.target. May 13 00:20:20.581546 systemd[1]: Starting kubelet.service... May 13 00:20:21.076952 systemd[1]: Started kubelet.service. May 13 00:20:21.581582 kubelet[1373]: E0513 00:20:21.581537 1373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:21.583518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:21.583659 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:21.739911 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:20:21.757327 systemd[1]: Finished sshd-keygen.service. May 13 00:20:21.759435 systemd[1]: Starting issuegen.service... May 13 00:20:21.763934 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:20:21.764134 systemd[1]: Finished issuegen.service. May 13 00:20:21.766096 systemd[1]: Starting systemd-user-sessions.service... May 13 00:20:21.771543 systemd[1]: Finished systemd-user-sessions.service. May 13 00:20:21.773700 systemd[1]: Started getty@tty1.service. May 13 00:20:21.775841 systemd[1]: Started serial-getty@ttyAMA0.service. May 13 00:20:21.776845 systemd[1]: Reached target getty.target. May 13 00:20:21.777786 systemd[1]: Reached target multi-user.target. May 13 00:20:21.779870 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:20:21.786374 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:20:21.786708 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:20:21.787748 systemd[1]: Startup finished in 6.710s (kernel) + 5.092s (userspace) = 11.802s. May 13 00:20:23.486763 systemd[1]: Created slice system-sshd.slice. May 13 00:20:23.488613 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:43932.service. May 13 00:20:23.533538 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 43932 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:20:23.535528 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:23.544650 systemd-logind[1299]: New session 1 of user core. May 13 00:20:23.546681 systemd[1]: Created slice user-500.slice. May 13 00:20:23.547975 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:20:23.557155 systemd[1]: Finished user-runtime-dir@500.service. 
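In the containerd startup above, the CRI plugin reported that /etc/cni/net.d holds no network config; the "Start cni network conf syncer" line means it will pick one up as soon as a file appears, normally installed later by a network addon. A minimal conflist sketch, assuming the standard bridge and host-local plugins exist under /opt/cni/bin (the NetworkPluginBinDir in the dump above); the name and subnet are illustrative:

# /etc/cni/net.d/10-example.conflist (hypothetical)
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
  ]
}

With NetworkPluginMaxConfNum:1 in the dumped config, only the lexically first file in that directory would be used.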
May 13 00:20:23.558585 systemd[1]: Starting user@500.service... May 13 00:20:23.561569 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:23.621207 systemd[1405]: Queued start job for default target default.target. May 13 00:20:23.621465 systemd[1405]: Reached target paths.target. May 13 00:20:23.621481 systemd[1405]: Reached target sockets.target. May 13 00:20:23.621492 systemd[1405]: Reached target timers.target. May 13 00:20:23.621502 systemd[1405]: Reached target basic.target. May 13 00:20:23.621546 systemd[1405]: Reached target default.target. May 13 00:20:23.621566 systemd[1405]: Startup finished in 54ms. May 13 00:20:23.621840 systemd[1]: Started user@500.service. May 13 00:20:23.622790 systemd[1]: Started session-1.scope. May 13 00:20:23.672871 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:43940.service. May 13 00:20:23.707140 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 43940 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:20:23.708754 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:23.712422 systemd-logind[1299]: New session 2 of user core. May 13 00:20:23.712871 systemd[1]: Started session-2.scope. May 13 00:20:23.765419 sshd[1414]: pam_unix(sshd:session): session closed for user core May 13 00:20:23.767801 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:43956.service. May 13 00:20:23.768277 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:43940.service: Deactivated successfully. May 13 00:20:23.769083 systemd-logind[1299]: Session 2 logged out. Waiting for processes to exit. May 13 00:20:23.769137 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:20:23.770896 systemd-logind[1299]: Removed session 2. May 13 00:20:23.811671 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 43956 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:20:23.812819 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:23.816888 systemd-logind[1299]: New session 3 of user core. May 13 00:20:23.816977 systemd[1]: Started session-3.scope. May 13 00:20:23.865427 sshd[1419]: pam_unix(sshd:session): session closed for user core May 13 00:20:23.867597 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:43958.service. May 13 00:20:23.868622 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:43956.service: Deactivated successfully. May 13 00:20:23.869279 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:20:23.870421 systemd-logind[1299]: Session 3 logged out. Waiting for processes to exit. May 13 00:20:23.871126 systemd-logind[1299]: Removed session 3. May 13 00:20:23.901736 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 43958 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:20:23.902827 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:23.906342 systemd-logind[1299]: New session 4 of user core. May 13 00:20:23.906756 systemd[1]: Started session-4.scope. May 13 00:20:23.960303 sshd[1426]: pam_unix(sshd:session): session closed for user core May 13 00:20:23.962574 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:43968.service. May 13 00:20:23.963068 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:43958.service: Deactivated successfully. May 13 00:20:23.963938 systemd-logind[1299]: Session 4 logged out. Waiting for processes to exit. May 13 00:20:23.963982 systemd[1]: session-4.scope: Deactivated successfully. 
May 13 00:20:23.964827 systemd-logind[1299]: Removed session 4. May 13 00:20:23.995853 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 43968 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:20:23.997468 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:24.000865 systemd-logind[1299]: New session 5 of user core. May 13 00:20:24.001700 systemd[1]: Started session-5.scope. May 13 00:20:24.063063 sudo[1439]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:20:24.063288 sudo[1439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:20:24.139202 systemd[1]: Starting docker.service... May 13 00:20:24.268474 env[1451]: time="2025-05-13T00:20:24.268419171Z" level=info msg="Starting up" May 13 00:20:24.270413 env[1451]: time="2025-05-13T00:20:24.270385734Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:20:24.270509 env[1451]: time="2025-05-13T00:20:24.270495568Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:20:24.270575 env[1451]: time="2025-05-13T00:20:24.270558007Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:20:24.270644 env[1451]: time="2025-05-13T00:20:24.270630713Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:20:24.273512 env[1451]: time="2025-05-13T00:20:24.273473129Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:20:24.273512 env[1451]: time="2025-05-13T00:20:24.273503413Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:20:24.273616 env[1451]: time="2025-05-13T00:20:24.273519729Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:20:24.273616 env[1451]: time="2025-05-13T00:20:24.273529041Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:20:24.443117 env[1451]: time="2025-05-13T00:20:24.443027374Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 00:20:24.443582 env[1451]: time="2025-05-13T00:20:24.443558601Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 00:20:24.443876 env[1451]: time="2025-05-13T00:20:24.443852250Z" level=info msg="Loading containers: start." May 13 00:20:24.561390 kernel: Initializing XFRM netlink socket May 13 00:20:24.586026 env[1451]: time="2025-05-13T00:20:24.585975268Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:20:24.649195 systemd-networkd[1091]: docker0: Link UP May 13 00:20:24.668937 env[1451]: time="2025-05-13T00:20:24.668893574Z" level=info msg="Loading containers: done." May 13 00:20:24.696584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2552371621-merged.mount: Deactivated successfully. 
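dockerd has just noted that the default bridge claims 172.17.0.0/16 and that --bip can override it. The same option persists in the daemon config file; a sketch with an arbitrary example subnet (this host evidently keeps the default):

# /etc/docker/daemon.json (illustrative)
{
  "bip": "172.18.1.1/24"
}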
May 13 00:20:24.700847 env[1451]: time="2025-05-13T00:20:24.700800454Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:20:24.701005 env[1451]: time="2025-05-13T00:20:24.700986974Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:20:24.701090 env[1451]: time="2025-05-13T00:20:24.701076553Z" level=info msg="Daemon has completed initialization" May 13 00:20:24.718269 systemd[1]: Started docker.service. May 13 00:20:24.721371 env[1451]: time="2025-05-13T00:20:24.721201690Z" level=info msg="API listen on /run/docker.sock" May 13 00:20:25.344742 env[1314]: time="2025-05-13T00:20:25.344683352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:20:25.970744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820677180.mount: Deactivated successfully. May 13 00:20:27.247775 env[1314]: time="2025-05-13T00:20:27.247700132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:27.249674 env[1314]: time="2025-05-13T00:20:27.249632643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:27.251287 env[1314]: time="2025-05-13T00:20:27.251259126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:27.253676 env[1314]: time="2025-05-13T00:20:27.253636187Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:27.254398 env[1314]: time="2025-05-13T00:20:27.254362568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 13 00:20:27.263284 env[1314]: time="2025-05-13T00:20:27.263250983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:20:28.988888 env[1314]: time="2025-05-13T00:20:28.988841166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:28.990171 env[1314]: time="2025-05-13T00:20:28.990134354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:28.991724 env[1314]: time="2025-05-13T00:20:28.991697251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:28.993435 env[1314]: time="2025-05-13T00:20:28.993408541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:20:28.994285 env[1314]: time="2025-05-13T00:20:28.994259540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 13 00:20:29.002953 env[1314]: time="2025-05-13T00:20:29.002928440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:20:30.148913 env[1314]: time="2025-05-13T00:20:30.148866585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:30.150514 env[1314]: time="2025-05-13T00:20:30.150481427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:30.152750 env[1314]: time="2025-05-13T00:20:30.152722866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:30.154158 env[1314]: time="2025-05-13T00:20:30.154135573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:30.155759 env[1314]: time="2025-05-13T00:20:30.155729224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 13 00:20:30.166329 env[1314]: time="2025-05-13T00:20:30.166296173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:20:31.341037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913023733.mount: Deactivated successfully. May 13 00:20:31.763292 env[1314]: time="2025-05-13T00:20:31.763166646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:31.765215 env[1314]: time="2025-05-13T00:20:31.765189173Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:31.767427 env[1314]: time="2025-05-13T00:20:31.767404631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:31.769226 env[1314]: time="2025-05-13T00:20:31.769184326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:31.769818 env[1314]: time="2025-05-13T00:20:31.769788429Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 00:20:31.775173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:20:31.775326 systemd[1]: Stopped kubelet.service. May 13 00:20:31.776740 systemd[1]: Starting kubelet.service... 
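These PullImage/ImageCreate events come from containerd's k8s.io namespace. With containerd 1.6 the bundled ctr tool can list what has been pulled so far; a usage sketch, assuming ctr is on PATH alongside containerd:

# list images in the namespace the CRI plugin uses
ctr --namespace k8s.io images ls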
May 13 00:20:31.779060 env[1314]: time="2025-05-13T00:20:31.779030096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:20:31.854865 systemd[1]: Started kubelet.service. May 13 00:20:31.973463 kubelet[1624]: E0513 00:20:31.973416 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:31.976016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:31.976167 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:32.502024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618282511.mount: Deactivated successfully. May 13 00:20:33.432770 env[1314]: time="2025-05-13T00:20:33.432713737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.434329 env[1314]: time="2025-05-13T00:20:33.434291745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.436075 env[1314]: time="2025-05-13T00:20:33.436046480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.437768 env[1314]: time="2025-05-13T00:20:33.437741468Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.438585 env[1314]: time="2025-05-13T00:20:33.438548424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 00:20:33.447203 env[1314]: time="2025-05-13T00:20:33.447161988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:20:33.915602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130815692.mount: Deactivated successfully. 
May 13 00:20:33.922533 env[1314]: time="2025-05-13T00:20:33.922488550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.924101 env[1314]: time="2025-05-13T00:20:33.924062684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.925092 env[1314]: time="2025-05-13T00:20:33.925067655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.926924 env[1314]: time="2025-05-13T00:20:33.926886811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:33.927616 env[1314]: time="2025-05-13T00:20:33.927587331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 13 00:20:33.937184 env[1314]: time="2025-05-13T00:20:33.937156941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:20:34.424602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848408196.mount: Deactivated successfully. May 13 00:20:36.603640 env[1314]: time="2025-05-13T00:20:36.603556965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:36.605840 env[1314]: time="2025-05-13T00:20:36.605811479Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:36.608316 env[1314]: time="2025-05-13T00:20:36.608287004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:36.609990 env[1314]: time="2025-05-13T00:20:36.609954203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:36.610881 env[1314]: time="2025-05-13T00:20:36.610852836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 13 00:20:42.025204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:20:42.025388 systemd[1]: Stopped kubelet.service. May 13 00:20:42.026857 systemd[1]: Starting kubelet.service... May 13 00:20:42.113864 systemd[1]: Started kubelet.service. 
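The pause image just pulled is 3.9, while the CRI config dumped at containerd startup still advertises SandboxImage registry.k8s.io/pause:3.6; the two sides can be aligned in containerd's config file. A sketch of the relevant keys in containerd 1.6 config v2 format (changing them requires a containerd restart; the SystemdCgroup value mirrors the startup dump):

# /etc/containerd/config.toml fragment (illustrative)
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false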
May 13 00:20:42.156576 kubelet[1726]: E0513 00:20:42.156538 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:42.158355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:42.158502 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:43.410853 systemd[1]: Stopped kubelet.service. May 13 00:20:43.413405 systemd[1]: Starting kubelet.service... May 13 00:20:43.437554 systemd[1]: Reloading. May 13 00:20:43.493846 /usr/lib/systemd/system-generators/torcx-generator[1763]: time="2025-05-13T00:20:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:20:43.494212 /usr/lib/systemd/system-generators/torcx-generator[1763]: time="2025-05-13T00:20:43Z" level=info msg="torcx already run" May 13 00:20:43.625614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:20:43.625635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:20:43.641297 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:20:43.709945 systemd[1]: Started kubelet.service. May 13 00:20:43.711265 systemd[1]: Stopping kubelet.service... May 13 00:20:43.711549 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:20:43.711773 systemd[1]: Stopped kubelet.service. May 13 00:20:43.713227 systemd[1]: Starting kubelet.service... May 13 00:20:43.791467 systemd[1]: Started kubelet.service. May 13 00:20:43.838234 kubelet[1822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:20:43.838234 kubelet[1822]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:20:43.838234 kubelet[1822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
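The repeated kubelet exits above all stem from the missing /var/lib/kubelet/config.yaml; the deprecation notices below reference the --config file the kubelet finally loads, so the file was evidently provisioned between the last failure and the 00:20:43 restart. A minimal sketch of the expected format, with illustrative values rather than this node's eventual config:

# /var/lib/kubelet/config.yaml — KubeletConfiguration sketch (hypothetical)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs   # matches the CgroupDriver the container manager reports below
authentication:
  anonymous:
    enabled: false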
May 13 00:20:43.839206 kubelet[1822]: I0513 00:20:43.839152 1822 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:20:44.947997 kubelet[1822]: I0513 00:20:44.947950 1822 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:20:44.947997 kubelet[1822]: I0513 00:20:44.947987 1822 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:20:44.948328 kubelet[1822]: I0513 00:20:44.948184 1822 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:20:44.985729 kubelet[1822]: E0513 00:20:44.985698 1822 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:44.985837 kubelet[1822]: I0513 00:20:44.985758 1822 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:20:44.993551 kubelet[1822]: I0513 00:20:44.993500 1822 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:20:44.994854 kubelet[1822]: I0513 00:20:44.994818 1822 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:20:44.995020 kubelet[1822]: I0513 00:20:44.994862 1822 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:20:44.995103 kubelet[1822]: I0513 00:20:44.995094 1822 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:20:44.995133 kubelet[1822]: I0513 00:20:44.995104 1822 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:20:44.995358 kubelet[1822]: I0513 00:20:44.995344 1822 state_mem.go:36] "Initialized new in-memory state store" May 13 
00:20:44.996506 kubelet[1822]: I0513 00:20:44.996469 1822 kubelet.go:400] "Attempting to sync node with API server" May 13 00:20:44.996506 kubelet[1822]: I0513 00:20:44.996498 1822 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:20:44.996803 kubelet[1822]: I0513 00:20:44.996791 1822 kubelet.go:312] "Adding apiserver pod source" May 13 00:20:44.996889 kubelet[1822]: I0513 00:20:44.996874 1822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:20:44.996989 kubelet[1822]: W0513 00:20:44.996948 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:44.997032 kubelet[1822]: E0513 00:20:44.996999 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:44.997304 kubelet[1822]: W0513 00:20:44.997266 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:44.997346 kubelet[1822]: E0513 00:20:44.997310 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:44.999745 kubelet[1822]: I0513 00:20:44.999721 1822 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:20:45.002326 kubelet[1822]: I0513 00:20:45.002293 1822 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:20:45.002428 kubelet[1822]: W0513 00:20:45.002413 1822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:20:45.003166 kubelet[1822]: I0513 00:20:45.003146 1822 server.go:1264] "Started kubelet" May 13 00:20:45.003733 kubelet[1822]: I0513 00:20:45.003682 1822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:20:45.003989 kubelet[1822]: I0513 00:20:45.003967 1822 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:20:45.004032 kubelet[1822]: I0513 00:20:45.004006 1822 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:20:45.005249 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
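"Adding static pod path" above means this kubelet also watches /etc/kubernetes/manifests, which is how the control plane bootstraps itself: the connection-refused errors against https://10.0.0.41:6443 persist only until an apiserver static pod from that directory is running. The shape of a static pod manifest, reduced to a hypothetical minimum (real control-plane manifests are generated, not hand-written):

# /etc/kubernetes/manifests/example.yaml (hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: example-static
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9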
May 13 00:20:45.005433 kubelet[1822]: I0513 00:20:45.005413 1822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:20:45.010892 kubelet[1822]: I0513 00:20:45.010839 1822 server.go:455] "Adding debug handlers to kubelet server" May 13 00:20:45.011042 kubelet[1822]: E0513 00:20:45.008723 1822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee3b8d0b1685 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:20:45.003126405 +0000 UTC m=+1.208552369,LastTimestamp:2025-05-13 00:20:45.003126405 +0000 UTC m=+1.208552369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:20:45.011473 kubelet[1822]: E0513 00:20:45.011446 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:20:45.011666 kubelet[1822]: I0513 00:20:45.011652 1822 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:20:45.011826 kubelet[1822]: I0513 00:20:45.011811 1822 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:20:45.011988 kubelet[1822]: I0513 00:20:45.011979 1822 reconciler.go:26] "Reconciler: start to sync state" May 13 00:20:45.012502 kubelet[1822]: W0513 00:20:45.012453 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:45.012574 kubelet[1822]: E0513 00:20:45.012506 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:45.012672 kubelet[1822]: E0513 00:20:45.012648 1822 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:20:45.013009 kubelet[1822]: E0513 00:20:45.012981 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms" May 13 00:20:45.013174 kubelet[1822]: I0513 00:20:45.013148 1822 factory.go:221] Registration of the systemd container factory successfully May 13 00:20:45.013252 kubelet[1822]: I0513 00:20:45.013234 1822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:20:45.014484 kubelet[1822]: I0513 00:20:45.014456 1822 factory.go:221] Registration of the containerd container factory successfully May 13 00:20:45.032037 kubelet[1822]: I0513 00:20:45.032000 1822 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 00:20:45.033328 kubelet[1822]: I0513 00:20:45.033311 1822 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:20:45.033470 kubelet[1822]: I0513 00:20:45.033448 1822 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:20:45.033546 kubelet[1822]: I0513 00:20:45.033537 1822 state_mem.go:36] "Initialized new in-memory state store" May 13 00:20:45.033599 kubelet[1822]: I0513 00:20:45.033342 1822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:20:45.033755 kubelet[1822]: I0513 00:20:45.033735 1822 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:20:45.033755 kubelet[1822]: I0513 00:20:45.033756 1822 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:20:45.033826 kubelet[1822]: E0513 00:20:45.033797 1822 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:20:45.034263 kubelet[1822]: W0513 00:20:45.034202 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:45.034327 kubelet[1822]: E0513 00:20:45.034272 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:45.098287 kubelet[1822]: I0513 00:20:45.098257 1822 policy_none.go:49] "None policy: Start" May 13 00:20:45.099233 kubelet[1822]: I0513 00:20:45.099188 1822 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:20:45.099332 kubelet[1822]: I0513 00:20:45.099321 1822 state_mem.go:35] "Initializing new in-memory state store" May 13 00:20:45.104198 kubelet[1822]: I0513 00:20:45.104176 1822 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:20:45.104477 kubelet[1822]: I0513 00:20:45.104431 1822 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:20:45.104627 kubelet[1822]: I0513 00:20:45.104616 1822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:20:45.105813 kubelet[1822]: E0513 00:20:45.105792 1822 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:20:45.112811 kubelet[1822]: I0513 00:20:45.112779 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:20:45.113144 kubelet[1822]: E0513 00:20:45.113117 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" May 13 00:20:45.134333 kubelet[1822]: I0513 00:20:45.134285 1822 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:20:45.135192 kubelet[1822]: I0513 00:20:45.135168 1822 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:20:45.136033 
kubelet[1822]: I0513 00:20:45.135944 1822 topology_manager.go:215] "Topology Admit Handler" podUID="aaedb0f7df1ff28abaadd7c002b516db" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:20:45.213852 kubelet[1822]: E0513 00:20:45.213769 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms" May 13 00:20:45.313215 kubelet[1822]: I0513 00:20:45.313192 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:45.313351 kubelet[1822]: I0513 00:20:45.313333 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:45.313499 kubelet[1822]: I0513 00:20:45.313481 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaedb0f7df1ff28abaadd7c002b516db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaedb0f7df1ff28abaadd7c002b516db\") " pod="kube-system/kube-apiserver-localhost" May 13 00:20:45.313606 kubelet[1822]: I0513 00:20:45.313591 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaedb0f7df1ff28abaadd7c002b516db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaedb0f7df1ff28abaadd7c002b516db\") " pod="kube-system/kube-apiserver-localhost" May 13 00:20:45.313707 kubelet[1822]: I0513 00:20:45.313694 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:45.313806 kubelet[1822]: I0513 00:20:45.313792 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:45.313888 kubelet[1822]: I0513 00:20:45.313875 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:45.313981 kubelet[1822]: I0513 00:20:45.313969 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:20:45.314075 kubelet[1822]: I0513 00:20:45.314062 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaedb0f7df1ff28abaadd7c002b516db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aaedb0f7df1ff28abaadd7c002b516db\") " pod="kube-system/kube-apiserver-localhost" May 13 00:20:45.314340 kubelet[1822]: I0513 00:20:45.314319 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:20:45.314671 kubelet[1822]: E0513 00:20:45.314635 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" May 13 00:20:45.438959 kubelet[1822]: E0513 00:20:45.438936 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:45.439519 env[1314]: time="2025-05-13T00:20:45.439481545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:20:45.441668 kubelet[1822]: E0513 00:20:45.441647 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:45.441721 kubelet[1822]: E0513 00:20:45.441650 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:45.442025 env[1314]: time="2025-05-13T00:20:45.441996605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aaedb0f7df1ff28abaadd7c002b516db,Namespace:kube-system,Attempt:0,}" May 13 00:20:45.442413 env[1314]: time="2025-05-13T00:20:45.442387004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:20:45.614785 kubelet[1822]: E0513 00:20:45.614736 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms" May 13 00:20:45.716314 kubelet[1822]: I0513 00:20:45.716285 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:20:45.716627 kubelet[1822]: E0513 00:20:45.716596 1822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" May 13 00:20:45.945183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945992668.mount: Deactivated successfully. 
May 13 00:20:45.949143 env[1314]: time="2025-05-13T00:20:45.949100967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.954037 env[1314]: time="2025-05-13T00:20:45.954006366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.955654 env[1314]: time="2025-05-13T00:20:45.955616627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.956424 env[1314]: time="2025-05-13T00:20:45.956401064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.957098 env[1314]: time="2025-05-13T00:20:45.957077694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.957955 env[1314]: time="2025-05-13T00:20:45.957912195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.958802 env[1314]: time="2025-05-13T00:20:45.958761172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.961536 env[1314]: time="2025-05-13T00:20:45.961509680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.963435 env[1314]: time="2025-05-13T00:20:45.963387217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.965681 env[1314]: time="2025-05-13T00:20:45.965651995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.967023 env[1314]: time="2025-05-13T00:20:45.967000057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.967737 env[1314]: time="2025-05-13T00:20:45.967712556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:20:45.988270 env[1314]: time="2025-05-13T00:20:45.988206762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:20:45.988270 env[1314]: time="2025-05-13T00:20:45.988244710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:20:45.988270 env[1314]: time="2025-05-13T00:20:45.988255147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:20:45.988548 env[1314]: time="2025-05-13T00:20:45.988508948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/940ca191f217c5e8ae12975291fd9b458c332b8cab877bddb37e1b87a1d1d65d pid=1869 runtime=io.containerd.runc.v2 May 13 00:20:45.990015 env[1314]: time="2025-05-13T00:20:45.989658871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:20:45.990131 env[1314]: time="2025-05-13T00:20:45.989994647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:20:45.990216 env[1314]: time="2025-05-13T00:20:45.990186788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:20:45.990448 env[1314]: time="2025-05-13T00:20:45.990410118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a92a8514bbefbe65eceed5ea93c355ae8b705d31a905448087b6f9258a64d497 pid=1879 runtime=io.containerd.runc.v2 May 13 00:20:45.991390 env[1314]: time="2025-05-13T00:20:45.990435710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:20:45.991693 env[1314]: time="2025-05-13T00:20:45.991644536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:20:45.991693 env[1314]: time="2025-05-13T00:20:45.991673367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:20:45.991931 env[1314]: time="2025-05-13T00:20:45.991885301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ecc66a472f7285da05232fe7fcf3a88b2633c392061fd60cee5fe5718622bc pid=1884 runtime=io.containerd.runc.v2 May 13 00:20:46.070547 env[1314]: time="2025-05-13T00:20:46.070504444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aaedb0f7df1ff28abaadd7c002b516db,Namespace:kube-system,Attempt:0,} returns sandbox id \"940ca191f217c5e8ae12975291fd9b458c332b8cab877bddb37e1b87a1d1d65d\"" May 13 00:20:46.071540 env[1314]: time="2025-05-13T00:20:46.071514450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a92a8514bbefbe65eceed5ea93c355ae8b705d31a905448087b6f9258a64d497\"" May 13 00:20:46.071764 kubelet[1822]: E0513 00:20:46.071731 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:46.072648 kubelet[1822]: E0513 00:20:46.072630 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:46.074102 env[1314]: time="2025-05-13T00:20:46.074062038Z" level=info msg="CreateContainer within sandbox \"940ca191f217c5e8ae12975291fd9b458c332b8cab877bddb37e1b87a1d1d65d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:20:46.075457 env[1314]: time="2025-05-13T00:20:46.075412192Z" level=info msg="CreateContainer within sandbox \"a92a8514bbefbe65eceed5ea93c355ae8b705d31a905448087b6f9258a64d497\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:20:46.080977 env[1314]: time="2025-05-13T00:20:46.080930775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0ecc66a472f7285da05232fe7fcf3a88b2633c392061fd60cee5fe5718622bc\"" May 13 00:20:46.081669 kubelet[1822]: E0513 00:20:46.081648 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:46.084407 env[1314]: time="2025-05-13T00:20:46.084355286Z" level=info msg="CreateContainer within sandbox \"d0ecc66a472f7285da05232fe7fcf3a88b2633c392061fd60cee5fe5718622bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:20:46.086432 env[1314]: time="2025-05-13T00:20:46.086355823Z" level=info msg="CreateContainer within sandbox \"940ca191f217c5e8ae12975291fd9b458c332b8cab877bddb37e1b87a1d1d65d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6dd1e316e5ebaaee154b12fcf1bc4250706109ddeb2afb1ddb0354203531f42\"" May 13 00:20:46.086969 env[1314]: time="2025-05-13T00:20:46.086928028Z" level=info msg="StartContainer for \"d6dd1e316e5ebaaee154b12fcf1bc4250706109ddeb2afb1ddb0354203531f42\"" May 13 00:20:46.092149 env[1314]: time="2025-05-13T00:20:46.092103424Z" level=info msg="CreateContainer within sandbox \"a92a8514bbefbe65eceed5ea93c355ae8b705d31a905448087b6f9258a64d497\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"f906f5eeea7fa482de76219c84869bf86d696658e9880c46863fc382e4e5638b\"" May 13 00:20:46.092707 env[1314]: time="2025-05-13T00:20:46.092680867Z" level=info msg="StartContainer for \"f906f5eeea7fa482de76219c84869bf86d696658e9880c46863fc382e4e5638b\"" May 13 00:20:46.106341 env[1314]: time="2025-05-13T00:20:46.106297893Z" level=info msg="CreateContainer within sandbox \"d0ecc66a472f7285da05232fe7fcf3a88b2633c392061fd60cee5fe5718622bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ae7b0ecb81c9bcd93edf149955b3c52b3bd38bc5addbcfb51a75df8a81272958\"" May 13 00:20:46.106703 env[1314]: time="2025-05-13T00:20:46.106678829Z" level=info msg="StartContainer for \"ae7b0ecb81c9bcd93edf149955b3c52b3bd38bc5addbcfb51a75df8a81272958\"" May 13 00:20:46.119733 kubelet[1822]: W0513 00:20:46.117360 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:46.119733 kubelet[1822]: E0513 00:20:46.117447 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:46.179273 env[1314]: time="2025-05-13T00:20:46.179224108Z" level=info msg="StartContainer for \"d6dd1e316e5ebaaee154b12fcf1bc4250706109ddeb2afb1ddb0354203531f42\" returns successfully" May 13 00:20:46.189066 kubelet[1822]: W0513 00:20:46.189009 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:46.189178 kubelet[1822]: E0513 00:20:46.189076 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:46.207362 env[1314]: time="2025-05-13T00:20:46.207261461Z" level=info msg="StartContainer for \"f906f5eeea7fa482de76219c84869bf86d696658e9880c46863fc382e4e5638b\" returns successfully" May 13 00:20:46.241461 env[1314]: time="2025-05-13T00:20:46.239034921Z" level=info msg="StartContainer for \"ae7b0ecb81c9bcd93edf149955b3c52b3bd38bc5addbcfb51a75df8a81272958\" returns successfully" May 13 00:20:46.274256 kubelet[1822]: E0513 00:20:46.271986 1822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee3b8d0b1685 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:20:45.003126405 +0000 UTC m=+1.208552369,LastTimestamp:2025-05-13 00:20:45.003126405 +0000 UTC m=+1.208552369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:20:46.284640 
kubelet[1822]: W0513 00:20:46.281422 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:46.284640 kubelet[1822]: E0513 00:20:46.281493 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 13 00:20:46.415781 kubelet[1822]: E0513 00:20:46.415724 1822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="1.6s" May 13 00:20:46.519192 kubelet[1822]: I0513 00:20:46.518396 1822 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:20:47.041133 kubelet[1822]: E0513 00:20:47.041098 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:47.043255 kubelet[1822]: E0513 00:20:47.043229 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:47.044850 kubelet[1822]: E0513 00:20:47.044824 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:47.930872 kubelet[1822]: I0513 00:20:47.930824 1822 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:20:47.999103 kubelet[1822]: I0513 00:20:47.999074 1822 apiserver.go:52] "Watching apiserver" May 13 00:20:48.012882 kubelet[1822]: I0513 00:20:48.012850 1822 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:20:48.052600 kubelet[1822]: E0513 00:20:48.052552 1822 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:20:48.053069 kubelet[1822]: E0513 00:20:48.053052 1822 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:50.262651 systemd[1]: Reloading. May 13 00:20:50.327497 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2025-05-13T00:20:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:20:50.327526 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2025-05-13T00:20:50Z" level=info msg="torcx already run" May 13 00:20:50.393334 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:20:50.393491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. May 13 00:20:50.409781 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:20:50.480266 systemd[1]: Stopping kubelet.service... May 13 00:20:50.495768 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:20:50.496048 systemd[1]: Stopped kubelet.service. May 13 00:20:50.497735 systemd[1]: Starting kubelet.service... May 13 00:20:50.577931 systemd[1]: Started kubelet.service. May 13 00:20:50.617785 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:20:50.617785 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:20:50.617785 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:20:50.618189 kubelet[2169]: I0513 00:20:50.617833 2169 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:20:50.622314 kubelet[2169]: I0513 00:20:50.622279 2169 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:20:50.622314 kubelet[2169]: I0513 00:20:50.622308 2169 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:20:50.622544 kubelet[2169]: I0513 00:20:50.622526 2169 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:20:50.623875 kubelet[2169]: I0513 00:20:50.623849 2169 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:20:50.625015 kubelet[2169]: I0513 00:20:50.624994 2169 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:20:50.633616 kubelet[2169]: I0513 00:20:50.633589 2169 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:20:50.634211 kubelet[2169]: I0513 00:20:50.634181 2169 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:20:50.634495 kubelet[2169]: I0513 00:20:50.634308 2169 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:20:50.634644 kubelet[2169]: I0513 00:20:50.634629 2169 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:20:50.634702 kubelet[2169]: I0513 00:20:50.634693 2169 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:20:50.634799 kubelet[2169]: I0513 00:20:50.634788 2169 state_mem.go:36] "Initialized new in-memory state store" May 13 00:20:50.634967 kubelet[2169]: I0513 00:20:50.634955 2169 kubelet.go:400] "Attempting to sync node with API server" May 13 00:20:50.635140 kubelet[2169]: I0513 00:20:50.635118 2169 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:20:50.635233 kubelet[2169]: I0513 00:20:50.635222 2169 kubelet.go:312] "Adding apiserver pod source" May 13 00:20:50.635309 kubelet[2169]: I0513 00:20:50.635298 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:20:50.640274 kubelet[2169]: I0513 00:20:50.635970 2169 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:20:50.640274 kubelet[2169]: I0513 00:20:50.636135 2169 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:20:50.640274 kubelet[2169]: I0513 00:20:50.636534 2169 server.go:1264] "Started kubelet" May 13 00:20:50.640274 kubelet[2169]: I0513 00:20:50.639403 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:20:50.641478 kubelet[2169]: I0513 00:20:50.641451 2169 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:20:50.642423 kubelet[2169]: I0513 00:20:50.642394 2169 desired_state_of_world_populator.go:149] "Desired 
state populator starts to run" May 13 00:20:50.642678 kubelet[2169]: I0513 00:20:50.642659 2169 reconciler.go:26] "Reconciler: start to sync state" May 13 00:20:50.650354 kubelet[2169]: I0513 00:20:50.650311 2169 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:20:50.652264 kubelet[2169]: I0513 00:20:50.651758 2169 server.go:455] "Adding debug handlers to kubelet server" May 13 00:20:50.653183 kubelet[2169]: I0513 00:20:50.653088 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:20:50.653591 kubelet[2169]: I0513 00:20:50.653552 2169 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:20:50.664521 kubelet[2169]: I0513 00:20:50.664481 2169 factory.go:221] Registration of the systemd container factory successfully May 13 00:20:50.665845 kubelet[2169]: I0513 00:20:50.665467 2169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:20:50.665845 kubelet[2169]: E0513 00:20:50.665652 2169 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:20:50.669468 kubelet[2169]: I0513 00:20:50.667237 2169 factory.go:221] Registration of the containerd container factory successfully May 13 00:20:50.682733 kubelet[2169]: I0513 00:20:50.682676 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:20:50.684776 kubelet[2169]: I0513 00:20:50.684625 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:20:50.684776 kubelet[2169]: I0513 00:20:50.684688 2169 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:20:50.684776 kubelet[2169]: I0513 00:20:50.684709 2169 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:20:50.684897 kubelet[2169]: E0513 00:20:50.684794 2169 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:20:50.717958 kubelet[2169]: I0513 00:20:50.717926 2169 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:20:50.717958 kubelet[2169]: I0513 00:20:50.717948 2169 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:20:50.717958 kubelet[2169]: I0513 00:20:50.717969 2169 state_mem.go:36] "Initialized new in-memory state store" May 13 00:20:50.718144 kubelet[2169]: I0513 00:20:50.718111 2169 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:20:50.718144 kubelet[2169]: I0513 00:20:50.718132 2169 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:20:50.718195 kubelet[2169]: I0513 00:20:50.718151 2169 policy_none.go:49] "None policy: Start" May 13 00:20:50.718880 kubelet[2169]: I0513 00:20:50.718861 2169 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:20:50.718932 kubelet[2169]: I0513 00:20:50.718885 2169 state_mem.go:35] "Initializing new in-memory state store" May 13 00:20:50.719067 kubelet[2169]: I0513 00:20:50.719050 2169 state_mem.go:75] "Updated machine memory state" May 13 00:20:50.720309 kubelet[2169]: I0513 00:20:50.720287 2169 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:20:50.720544 
kubelet[2169]: I0513 00:20:50.720497 2169 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:20:50.720630 kubelet[2169]: I0513 00:20:50.720614 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:20:50.744674 kubelet[2169]: I0513 00:20:50.744649 2169 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:20:50.751644 kubelet[2169]: I0513 00:20:50.751613 2169 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:20:50.751764 kubelet[2169]: I0513 00:20:50.751694 2169 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:20:50.785936 kubelet[2169]: I0513 00:20:50.785894 2169 topology_manager.go:215] "Topology Admit Handler" podUID="aaedb0f7df1ff28abaadd7c002b516db" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:20:50.786068 kubelet[2169]: I0513 00:20:50.786014 2169 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:20:50.786068 kubelet[2169]: I0513 00:20:50.786050 2169 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:20:50.945276 kubelet[2169]: I0513 00:20:50.943918 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:50.945276 kubelet[2169]: I0513 00:20:50.943964 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaedb0f7df1ff28abaadd7c002b516db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaedb0f7df1ff28abaadd7c002b516db\") " pod="kube-system/kube-apiserver-localhost" May 13 00:20:50.945276 kubelet[2169]: I0513 00:20:50.943986 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaedb0f7df1ff28abaadd7c002b516db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aaedb0f7df1ff28abaadd7c002b516db\") " pod="kube-system/kube-apiserver-localhost" May 13 00:20:50.945276 kubelet[2169]: I0513 00:20:50.944012 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:50.945276 kubelet[2169]: I0513 00:20:50.944030 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:50.945572 kubelet[2169]: I0513 00:20:50.944051 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:20:50.945572 kubelet[2169]: I0513 00:20:50.944066 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaedb0f7df1ff28abaadd7c002b516db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaedb0f7df1ff28abaadd7c002b516db\") " pod="kube-system/kube-apiserver-localhost" May 13 00:20:50.945572 kubelet[2169]: I0513 00:20:50.944118 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:50.945572 kubelet[2169]: I0513 00:20:50.944166 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:20:51.092457 kubelet[2169]: E0513 00:20:51.092424 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:51.093075 kubelet[2169]: E0513 00:20:51.093044 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:51.093561 kubelet[2169]: E0513 00:20:51.093541 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:51.317719 sudo[2204]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:20:51.317939 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 00:20:51.635848 kubelet[2169]: I0513 00:20:51.635745 2169 apiserver.go:52] "Watching apiserver" May 13 00:20:51.642928 kubelet[2169]: I0513 00:20:51.642889 2169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:20:51.694109 kubelet[2169]: E0513 00:20:51.694072 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:51.694869 kubelet[2169]: E0513 00:20:51.694842 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:51.702680 kubelet[2169]: E0513 00:20:51.702612 2169 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:20:51.703163 kubelet[2169]: E0513 00:20:51.703138 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 
00:20:51.721241 kubelet[2169]: I0513 00:20:51.721172 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.721157483 podStartE2EDuration="1.721157483s" podCreationTimestamp="2025-05-13 00:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:20:51.721044099 +0000 UTC m=+1.138690077" watchObservedRunningTime="2025-05-13 00:20:51.721157483 +0000 UTC m=+1.138803461" May 13 00:20:51.737054 kubelet[2169]: I0513 00:20:51.736977 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7369490459999999 podStartE2EDuration="1.736949046s" podCreationTimestamp="2025-05-13 00:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:20:51.728740588 +0000 UTC m=+1.146386566" watchObservedRunningTime="2025-05-13 00:20:51.736949046 +0000 UTC m=+1.154595024" May 13 00:20:51.745283 kubelet[2169]: I0513 00:20:51.745091 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7450736359999999 podStartE2EDuration="1.745073636s" podCreationTimestamp="2025-05-13 00:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:20:51.737275241 +0000 UTC m=+1.154921219" watchObservedRunningTime="2025-05-13 00:20:51.745073636 +0000 UTC m=+1.162719613" May 13 00:20:51.804680 sudo[2204]: pam_unix(sudo:session): session closed for user root May 13 00:20:52.695736 kubelet[2169]: E0513 00:20:52.695695 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:53.696989 kubelet[2169]: E0513 00:20:53.696942 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:53.957583 kubelet[2169]: E0513 00:20:53.957427 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:54.685253 sudo[1439]: pam_unix(sudo:session): session closed for user root May 13 00:20:54.687423 sshd[1433]: pam_unix(sshd:session): session closed for user core May 13 00:20:54.690032 systemd-logind[1299]: Session 5 logged out. Waiting for processes to exit. May 13 00:20:54.690153 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:43968.service: Deactivated successfully. May 13 00:20:54.690917 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:20:54.691337 systemd-logind[1299]: Removed session 5. 
May 13 00:20:54.697832 kubelet[2169]: E0513 00:20:54.697801 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:58.430038 kubelet[2169]: E0513 00:20:58.429999 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:20:58.701721 kubelet[2169]: E0513 00:20:58.701463 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:03.964578 kubelet[2169]: E0513 00:21:03.964538 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:04.175420 kubelet[2169]: E0513 00:21:04.175392 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:04.711478 kubelet[2169]: E0513 00:21:04.711426 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:04.993905 update_engine[1305]: I0513 00:21:04.993791 1305 update_attempter.cc:509] Updating boot flags... May 13 00:21:05.556162 kubelet[2169]: I0513 00:21:05.556118 2169 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:21:05.556846 env[1314]: time="2025-05-13T00:21:05.556766729Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 00:21:05.557827 kubelet[2169]: I0513 00:21:05.557208 2169 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:21:05.758397 kubelet[2169]: I0513 00:21:05.758334 2169 topology_manager.go:215] "Topology Admit Handler" podUID="cee64aae-ea03-4a54-9153-eadc0c260a84" podNamespace="kube-system" podName="kube-proxy-n8bds" May 13 00:21:05.763359 kubelet[2169]: I0513 00:21:05.763324 2169 topology_manager.go:215] "Topology Admit Handler" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" podNamespace="kube-system" podName="cilium-xlvfv" May 13 00:21:05.852020 kubelet[2169]: I0513 00:21:05.851902 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-etc-cni-netd\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852020 kubelet[2169]: I0513 00:21:05.851961 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-hubble-tls\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852020 kubelet[2169]: I0513 00:21:05.852009 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cee64aae-ea03-4a54-9153-eadc0c260a84-xtables-lock\") pod \"kube-proxy-n8bds\" (UID: \"cee64aae-ea03-4a54-9153-eadc0c260a84\") " pod="kube-system/kube-proxy-n8bds" May 13 00:21:05.852191 kubelet[2169]: I0513 00:21:05.852031 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cee64aae-ea03-4a54-9153-eadc0c260a84-lib-modules\") pod \"kube-proxy-n8bds\" (UID: \"cee64aae-ea03-4a54-9153-eadc0c260a84\") " pod="kube-system/kube-proxy-n8bds" May 13 00:21:05.852191 kubelet[2169]: I0513 00:21:05.852051 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmk4\" (UniqueName: \"kubernetes.io/projected/cee64aae-ea03-4a54-9153-eadc0c260a84-kube-api-access-8pmk4\") pod \"kube-proxy-n8bds\" (UID: \"cee64aae-ea03-4a54-9153-eadc0c260a84\") " pod="kube-system/kube-proxy-n8bds" May 13 00:21:05.852191 kubelet[2169]: I0513 00:21:05.852080 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cee64aae-ea03-4a54-9153-eadc0c260a84-kube-proxy\") pod \"kube-proxy-n8bds\" (UID: \"cee64aae-ea03-4a54-9153-eadc0c260a84\") " pod="kube-system/kube-proxy-n8bds" May 13 00:21:05.852191 kubelet[2169]: I0513 00:21:05.852095 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-xtables-lock\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852191 kubelet[2169]: I0513 00:21:05.852110 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-config-path\") pod \"cilium-xlvfv\" (UID: 
\"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852322 kubelet[2169]: I0513 00:21:05.852126 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-net\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852322 kubelet[2169]: I0513 00:21:05.852150 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrtxb\" (UniqueName: \"kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-kube-api-access-hrtxb\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852322 kubelet[2169]: I0513 00:21:05.852167 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-run\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852322 kubelet[2169]: I0513 00:21:05.852183 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cni-path\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852322 kubelet[2169]: I0513 00:21:05.852199 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-lib-modules\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852322 kubelet[2169]: I0513 00:21:05.852223 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff2c8254-a99f-4511-8494-ecb1d0d05676-clustermesh-secrets\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852498 kubelet[2169]: I0513 00:21:05.852239 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-kernel\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852498 kubelet[2169]: I0513 00:21:05.852255 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-hostproc\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852498 kubelet[2169]: I0513 00:21:05.852281 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-bpf-maps\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:05.852498 kubelet[2169]: I0513 00:21:05.852306 2169 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-cgroup\") pod \"cilium-xlvfv\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") " pod="kube-system/cilium-xlvfv" May 13 00:21:06.062345 kubelet[2169]: E0513 00:21:06.062297 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.062951 env[1314]: time="2025-05-13T00:21:06.062896874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8bds,Uid:cee64aae-ea03-4a54-9153-eadc0c260a84,Namespace:kube-system,Attempt:0,}" May 13 00:21:06.067101 kubelet[2169]: E0513 00:21:06.066995 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.068807 env[1314]: time="2025-05-13T00:21:06.068551743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xlvfv,Uid:ff2c8254-a99f-4511-8494-ecb1d0d05676,Namespace:kube-system,Attempt:0,}" May 13 00:21:06.084177 env[1314]: time="2025-05-13T00:21:06.084089775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:06.084462 env[1314]: time="2025-05-13T00:21:06.084430649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:06.084537 env[1314]: time="2025-05-13T00:21:06.084488209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:06.084583 env[1314]: time="2025-05-13T00:21:06.084527328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:06.084583 env[1314]: time="2025-05-13T00:21:06.084537688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:06.084712 env[1314]: time="2025-05-13T00:21:06.084685605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:06.084768 env[1314]: time="2025-05-13T00:21:06.084698445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666 pid=2286 runtime=io.containerd.runc.v2 May 13 00:21:06.085071 env[1314]: time="2025-05-13T00:21:06.085039160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b314d43b8398474007b658abab16b10845dca33ecd0f546a455e17cb66ee69e2 pid=2284 runtime=io.containerd.runc.v2 May 13 00:21:06.153538 env[1314]: time="2025-05-13T00:21:06.153423147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xlvfv,Uid:ff2c8254-a99f-4511-8494-ecb1d0d05676,Namespace:kube-system,Attempt:0,} returns sandbox id \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\"" May 13 00:21:06.155077 kubelet[2169]: E0513 00:21:06.155049 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.157281 env[1314]: time="2025-05-13T00:21:06.157236126Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:21:06.158133 env[1314]: time="2025-05-13T00:21:06.158062793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8bds,Uid:cee64aae-ea03-4a54-9153-eadc0c260a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"b314d43b8398474007b658abab16b10845dca33ecd0f546a455e17cb66ee69e2\"" May 13 00:21:06.159071 kubelet[2169]: E0513 00:21:06.159046 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.162509 env[1314]: time="2025-05-13T00:21:06.162472442Z" level=info msg="CreateContainer within sandbox \"b314d43b8398474007b658abab16b10845dca33ecd0f546a455e17cb66ee69e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:21:06.177467 env[1314]: time="2025-05-13T00:21:06.177420723Z" level=info msg="CreateContainer within sandbox \"b314d43b8398474007b658abab16b10845dca33ecd0f546a455e17cb66ee69e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d337300d6a09763e5471ef256010f674275612193f9863a177eca95fc349d24\"" May 13 00:21:06.179416 env[1314]: time="2025-05-13T00:21:06.178342869Z" level=info msg="StartContainer for \"6d337300d6a09763e5471ef256010f674275612193f9863a177eca95fc349d24\"" May 13 00:21:06.248266 env[1314]: time="2025-05-13T00:21:06.248220712Z" level=info msg="StartContainer for \"6d337300d6a09763e5471ef256010f674275612193f9863a177eca95fc349d24\" returns successfully" May 13 00:21:06.355629 kubelet[2169]: I0513 00:21:06.350172 2169 topology_manager.go:215] "Topology Admit Handler" podUID="6864ada0-620f-45a3-b2fd-26f713126f11" podNamespace="kube-system" podName="cilium-operator-599987898-8sz7s" May 13 00:21:06.457280 kubelet[2169]: I0513 00:21:06.457158 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcv2m\" (UniqueName: \"kubernetes.io/projected/6864ada0-620f-45a3-b2fd-26f713126f11-kube-api-access-bcv2m\") pod \"cilium-operator-599987898-8sz7s\" (UID: \"6864ada0-620f-45a3-b2fd-26f713126f11\") " pod="kube-system/cilium-operator-599987898-8sz7s" May 13 
00:21:06.457280 kubelet[2169]: I0513 00:21:06.457210 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6864ada0-620f-45a3-b2fd-26f713126f11-cilium-config-path\") pod \"cilium-operator-599987898-8sz7s\" (UID: \"6864ada0-620f-45a3-b2fd-26f713126f11\") " pod="kube-system/cilium-operator-599987898-8sz7s" May 13 00:21:06.653386 kubelet[2169]: E0513 00:21:06.653315 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.653834 env[1314]: time="2025-05-13T00:21:06.653788550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8sz7s,Uid:6864ada0-620f-45a3-b2fd-26f713126f11,Namespace:kube-system,Attempt:0,}" May 13 00:21:06.667678 env[1314]: time="2025-05-13T00:21:06.667612209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:06.667678 env[1314]: time="2025-05-13T00:21:06.667652449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:06.667907 env[1314]: time="2025-05-13T00:21:06.667867445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:06.668148 env[1314]: time="2025-05-13T00:21:06.668109481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1 pid=2512 runtime=io.containerd.runc.v2 May 13 00:21:06.710914 env[1314]: time="2025-05-13T00:21:06.710820519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8sz7s,Uid:6864ada0-620f-45a3-b2fd-26f713126f11,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1\"" May 13 00:21:06.712201 kubelet[2169]: E0513 00:21:06.712174 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.716469 kubelet[2169]: E0513 00:21:06.715869 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:06.724283 kubelet[2169]: I0513 00:21:06.724211 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n8bds" podStartSLOduration=1.724197105 podStartE2EDuration="1.724197105s" podCreationTimestamp="2025-05-13 00:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:06.723583195 +0000 UTC m=+16.141229173" watchObservedRunningTime="2025-05-13 00:21:06.724197105 +0000 UTC m=+16.141843043" May 13 00:21:11.861457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536631007.mount: Deactivated successfully. 
May 13 00:21:14.090495 env[1314]: time="2025-05-13T00:21:14.090444973Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:21:14.091706 env[1314]: time="2025-05-13T00:21:14.091676239Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:21:14.093116 env[1314]: time="2025-05-13T00:21:14.093091224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:21:14.093711 env[1314]: time="2025-05-13T00:21:14.093680017Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 00:21:14.100678 env[1314]: time="2025-05-13T00:21:14.100643741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:21:14.103615 env[1314]: time="2025-05-13T00:21:14.101981846Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:21:14.111337 env[1314]: time="2025-05-13T00:21:14.111293184Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\"" May 13 00:21:14.113209 env[1314]: time="2025-05-13T00:21:14.112352052Z" level=info msg="StartContainer for \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\"" May 13 00:21:14.228890 env[1314]: time="2025-05-13T00:21:14.228841333Z" level=info msg="StartContainer for \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\" returns successfully" May 13 00:21:14.244319 env[1314]: time="2025-05-13T00:21:14.244273364Z" level=info msg="shim disconnected" id=dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17 May 13 00:21:14.244498 env[1314]: time="2025-05-13T00:21:14.244322923Z" level=warning msg="cleaning up after shim disconnected" id=dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17 namespace=k8s.io May 13 00:21:14.244498 env[1314]: time="2025-05-13T00:21:14.244333923Z" level=info msg="cleaning up dead shim" May 13 00:21:14.250639 env[1314]: time="2025-05-13T00:21:14.250599974Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:21:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2597 runtime=io.containerd.runc.v2\n" May 13 00:21:14.732801 kubelet[2169]: E0513 00:21:14.732768 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:14.737190 env[1314]: time="2025-05-13T00:21:14.736886073Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:21:14.760519 env[1314]: time="2025-05-13T00:21:14.760473934Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\"" May 13 00:21:14.761220 env[1314]: time="2025-05-13T00:21:14.761181407Z" level=info msg="StartContainer for \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\"" May 13 00:21:14.811473 env[1314]: time="2025-05-13T00:21:14.811421695Z" level=info msg="StartContainer for \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\" returns successfully" May 13 00:21:14.818157 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:21:14.818459 systemd[1]: Stopped systemd-sysctl.service. May 13 00:21:14.818674 systemd[1]: Stopping systemd-sysctl.service... May 13 00:21:14.820224 systemd[1]: Starting systemd-sysctl.service... May 13 00:21:14.830599 systemd[1]: Finished systemd-sysctl.service. May 13 00:21:14.842737 env[1314]: time="2025-05-13T00:21:14.842690271Z" level=info msg="shim disconnected" id=0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f May 13 00:21:14.842998 env[1314]: time="2025-05-13T00:21:14.842978188Z" level=warning msg="cleaning up after shim disconnected" id=0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f namespace=k8s.io May 13 00:21:14.843080 env[1314]: time="2025-05-13T00:21:14.843067347Z" level=info msg="cleaning up dead shim" May 13 00:21:14.850939 env[1314]: time="2025-05-13T00:21:14.850896661Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:21:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2661 runtime=io.containerd.runc.v2\n" May 13 00:21:15.108457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17-rootfs.mount: Deactivated successfully. May 13 00:21:15.260718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405953961.mount: Deactivated successfully. 
May 13 00:21:15.734915 kubelet[2169]: E0513 00:21:15.734878 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:15.738916 env[1314]: time="2025-05-13T00:21:15.738437895Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:21:15.760272 env[1314]: time="2025-05-13T00:21:15.760203386Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\"" May 13 00:21:15.760843 env[1314]: time="2025-05-13T00:21:15.760714180Z" level=info msg="StartContainer for \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\"" May 13 00:21:15.849595 env[1314]: time="2025-05-13T00:21:15.847018432Z" level=info msg="StartContainer for \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\" returns successfully" May 13 00:21:15.896079 env[1314]: time="2025-05-13T00:21:15.896032957Z" level=info msg="shim disconnected" id=06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6 May 13 00:21:15.896389 env[1314]: time="2025-05-13T00:21:15.896346073Z" level=warning msg="cleaning up after shim disconnected" id=06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6 namespace=k8s.io May 13 00:21:15.896470 env[1314]: time="2025-05-13T00:21:15.896455952Z" level=info msg="cleaning up dead shim" May 13 00:21:15.904199 env[1314]: time="2025-05-13T00:21:15.904165671Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:21:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2716 runtime=io.containerd.runc.v2\n" May 13 00:21:16.154667 env[1314]: time="2025-05-13T00:21:16.154611623Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:21:16.156047 env[1314]: time="2025-05-13T00:21:16.156014009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:21:16.157360 env[1314]: time="2025-05-13T00:21:16.157333516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:21:16.157934 env[1314]: time="2025-05-13T00:21:16.157900790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 00:21:16.161359 env[1314]: time="2025-05-13T00:21:16.161312396Z" level=info msg="CreateContainer within sandbox \"cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:21:16.171815 env[1314]: time="2025-05-13T00:21:16.171763010Z" level=info msg="CreateContainer within sandbox 
\"cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\"" May 13 00:21:16.173654 env[1314]: time="2025-05-13T00:21:16.172470643Z" level=info msg="StartContainer for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\"" May 13 00:21:16.239775 env[1314]: time="2025-05-13T00:21:16.239714485Z" level=info msg="StartContainer for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" returns successfully" May 13 00:21:16.738379 kubelet[2169]: E0513 00:21:16.738327 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:16.741093 kubelet[2169]: E0513 00:21:16.741050 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:16.743396 env[1314]: time="2025-05-13T00:21:16.743336565Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:21:16.818105 kubelet[2169]: I0513 00:21:16.818047 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8sz7s" podStartSLOduration=1.372191076 podStartE2EDuration="10.818029052s" podCreationTimestamp="2025-05-13 00:21:06 +0000 UTC" firstStartedPulling="2025-05-13 00:21:06.712884086 +0000 UTC m=+16.130530064" lastFinishedPulling="2025-05-13 00:21:16.158722062 +0000 UTC m=+25.576368040" observedRunningTime="2025-05-13 00:21:16.816405228 +0000 UTC m=+26.234051206" watchObservedRunningTime="2025-05-13 00:21:16.818029052 +0000 UTC m=+26.235675030" May 13 00:21:16.841964 env[1314]: time="2025-05-13T00:21:16.841902371Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\"" May 13 00:21:16.842468 env[1314]: time="2025-05-13T00:21:16.842432325Z" level=info msg="StartContainer for \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\"" May 13 00:21:16.956971 env[1314]: time="2025-05-13T00:21:16.956924411Z" level=info msg="StartContainer for \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\" returns successfully" May 13 00:21:16.979105 env[1314]: time="2025-05-13T00:21:16.979057547Z" level=info msg="shim disconnected" id=95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3 May 13 00:21:16.979105 env[1314]: time="2025-05-13T00:21:16.979107947Z" level=warning msg="cleaning up after shim disconnected" id=95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3 namespace=k8s.io May 13 00:21:16.979424 env[1314]: time="2025-05-13T00:21:16.979118787Z" level=info msg="cleaning up dead shim" May 13 00:21:16.991835 env[1314]: time="2025-05-13T00:21:16.991703660Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:21:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2810 runtime=io.containerd.runc.v2\n" May 13 00:21:17.744996 kubelet[2169]: E0513 00:21:17.744947 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:17.745460 kubelet[2169]: E0513 00:21:17.745316 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:17.747672 env[1314]: time="2025-05-13T00:21:17.747626858Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:21:17.762665 env[1314]: time="2025-05-13T00:21:17.761633963Z" level=info msg="CreateContainer within sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\"" May 13 00:21:17.763280 env[1314]: time="2025-05-13T00:21:17.763231107Z" level=info msg="StartContainer for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\"" May 13 00:21:17.856840 env[1314]: time="2025-05-13T00:21:17.856793962Z" level=info msg="StartContainer for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" returns successfully" May 13 00:21:18.016155 kubelet[2169]: I0513 00:21:18.015269 2169 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:21:18.106052 kubelet[2169]: I0513 00:21:18.106019 2169 topology_manager.go:215] "Topology Admit Handler" podUID="20d220dc-db8c-4a1e-b56c-13a7a40a2461" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pq5g5" May 13 00:21:18.106525 kubelet[2169]: I0513 00:21:18.106502 2169 topology_manager.go:215] "Topology Admit Handler" podUID="e9b4fa2a-016c-46cc-9f6a-60eeda8a8675" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8mpgc" May 13 00:21:18.109127 systemd[1]: run-containerd-runc-k8s.io-3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913-runc.U9yzhE.mount: Deactivated successfully. 
May 13 00:21:18.144613 kubelet[2169]: I0513 00:21:18.144568 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d220dc-db8c-4a1e-b56c-13a7a40a2461-config-volume\") pod \"coredns-7db6d8ff4d-pq5g5\" (UID: \"20d220dc-db8c-4a1e-b56c-13a7a40a2461\") " pod="kube-system/coredns-7db6d8ff4d-pq5g5" May 13 00:21:18.144613 kubelet[2169]: I0513 00:21:18.144609 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkwd\" (UniqueName: \"kubernetes.io/projected/20d220dc-db8c-4a1e-b56c-13a7a40a2461-kube-api-access-6fkwd\") pod \"coredns-7db6d8ff4d-pq5g5\" (UID: \"20d220dc-db8c-4a1e-b56c-13a7a40a2461\") " pod="kube-system/coredns-7db6d8ff4d-pq5g5" May 13 00:21:18.144799 kubelet[2169]: I0513 00:21:18.144633 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9b4fa2a-016c-46cc-9f6a-60eeda8a8675-config-volume\") pod \"coredns-7db6d8ff4d-8mpgc\" (UID: \"e9b4fa2a-016c-46cc-9f6a-60eeda8a8675\") " pod="kube-system/coredns-7db6d8ff4d-8mpgc" May 13 00:21:18.144799 kubelet[2169]: I0513 00:21:18.144651 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2g42\" (UniqueName: \"kubernetes.io/projected/e9b4fa2a-016c-46cc-9f6a-60eeda8a8675-kube-api-access-g2g42\") pod \"coredns-7db6d8ff4d-8mpgc\" (UID: \"e9b4fa2a-016c-46cc-9f6a-60eeda8a8675\") " pod="kube-system/coredns-7db6d8ff4d-8mpgc" May 13 00:21:18.237400 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 13 00:21:18.412849 kubelet[2169]: E0513 00:21:18.412808 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:18.413083 kubelet[2169]: E0513 00:21:18.413015 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:18.413870 env[1314]: time="2025-05-13T00:21:18.413831607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8mpgc,Uid:e9b4fa2a-016c-46cc-9f6a-60eeda8a8675,Namespace:kube-system,Attempt:0,}" May 13 00:21:18.413965 env[1314]: time="2025-05-13T00:21:18.413890886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq5g5,Uid:20d220dc-db8c-4a1e-b56c-13a7a40a2461,Namespace:kube-system,Attempt:0,}" May 13 00:21:18.438138 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:35972.service. May 13 00:21:18.488397 sshd[2919]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:18.489002 sshd[2919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:18.493963 systemd-logind[1299]: New session 6 of user core. May 13 00:21:18.494591 systemd[1]: Started session-6.scope. May 13 00:21:18.585401 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 13 00:21:18.635408 sshd[2919]: pam_unix(sshd:session): session closed for user core May 13 00:21:18.638009 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:35972.service: Deactivated successfully. May 13 00:21:18.638969 systemd[1]: session-6.scope: Deactivated successfully. 
May 13 00:21:18.638977 systemd-logind[1299]: Session 6 logged out. Waiting for processes to exit. May 13 00:21:18.639965 systemd-logind[1299]: Removed session 6. May 13 00:21:18.749308 kubelet[2169]: E0513 00:21:18.749205 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:18.765288 kubelet[2169]: I0513 00:21:18.765221 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xlvfv" podStartSLOduration=5.82240056 podStartE2EDuration="13.765206459s" podCreationTimestamp="2025-05-13 00:21:05 +0000 UTC" firstStartedPulling="2025-05-13 00:21:06.156661535 +0000 UTC m=+15.574307473" lastFinishedPulling="2025-05-13 00:21:14.099467394 +0000 UTC m=+23.517113372" observedRunningTime="2025-05-13 00:21:18.764648704 +0000 UTC m=+28.182294682" watchObservedRunningTime="2025-05-13 00:21:18.765206459 +0000 UTC m=+28.182852437" May 13 00:21:19.751577 kubelet[2169]: E0513 00:21:19.751535 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:20.210617 systemd-networkd[1091]: cilium_host: Link UP May 13 00:21:20.211557 systemd-networkd[1091]: cilium_net: Link UP May 13 00:21:20.213782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:21:20.213855 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:21:20.216361 systemd-networkd[1091]: cilium_net: Gained carrier May 13 00:21:20.216574 systemd-networkd[1091]: cilium_host: Gained carrier May 13 00:21:20.216673 systemd-networkd[1091]: cilium_net: Gained IPv6LL May 13 00:21:20.216784 systemd-networkd[1091]: cilium_host: Gained IPv6LL May 13 00:21:20.322404 systemd-networkd[1091]: cilium_vxlan: Link UP May 13 00:21:20.322410 systemd-networkd[1091]: cilium_vxlan: Gained carrier May 13 00:21:20.646399 kernel: NET: Registered PF_ALG protocol family May 13 00:21:20.753348 kubelet[2169]: E0513 00:21:20.753292 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:21.260926 systemd-networkd[1091]: lxc_health: Link UP May 13 00:21:21.271587 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:21:21.271408 systemd-networkd[1091]: lxc_health: Gained carrier May 13 00:21:21.521464 systemd-networkd[1091]: lxc411ade3f5f66: Link UP May 13 00:21:21.530255 systemd-networkd[1091]: lxc1f72968f1cf2: Link UP May 13 00:21:21.539430 kernel: eth0: renamed from tmpb4160 May 13 00:21:21.549839 kernel: eth0: renamed from tmpa6db7 May 13 00:21:21.556396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc411ade3f5f66: link becomes ready May 13 00:21:21.558340 systemd-networkd[1091]: lxc411ade3f5f66: Gained carrier May 13 00:21:21.559072 systemd-networkd[1091]: lxc1f72968f1cf2: Gained carrier May 13 00:21:21.559492 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1f72968f1cf2: link becomes ready May 13 00:21:21.952518 systemd-networkd[1091]: cilium_vxlan: Gained IPv6LL May 13 00:21:22.070525 kubelet[2169]: E0513 00:21:22.070477 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:22.464515 systemd-networkd[1091]: lxc_health: Gained IPv6LL May 13 
00:21:22.757378 kubelet[2169]: E0513 00:21:22.757024 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:22.912530 systemd-networkd[1091]: lxc411ade3f5f66: Gained IPv6LL May 13 00:21:22.912859 systemd-networkd[1091]: lxc1f72968f1cf2: Gained IPv6LL May 13 00:21:23.638874 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:60482.service. May 13 00:21:23.696146 sshd[3379]: Accepted publickey for core from 10.0.0.1 port 60482 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:23.697591 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:23.702478 systemd[1]: Started session-7.scope. May 13 00:21:23.703554 systemd-logind[1299]: New session 7 of user core. May 13 00:21:23.759265 kubelet[2169]: E0513 00:21:23.759233 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:23.832648 sshd[3379]: pam_unix(sshd:session): session closed for user core May 13 00:21:23.835632 systemd[1]: sshd@6-10.0.0.41:22-10.0.0.1:60482.service: Deactivated successfully. May 13 00:21:23.836889 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:21:23.837510 systemd-logind[1299]: Session 7 logged out. Waiting for processes to exit. May 13 00:21:23.838336 systemd-logind[1299]: Removed session 7. May 13 00:21:25.128566 env[1314]: time="2025-05-13T00:21:25.128478835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:25.128997 env[1314]: time="2025-05-13T00:21:25.128548555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:25.128997 env[1314]: time="2025-05-13T00:21:25.128562434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:25.128997 env[1314]: time="2025-05-13T00:21:25.128756713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6db7573618fa30ca49c4ae38dcda70822ca911944a87d0a1739ad37e74df7b5 pid=3420 runtime=io.containerd.runc.v2 May 13 00:21:25.129464 env[1314]: time="2025-05-13T00:21:25.129398148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:25.129574 env[1314]: time="2025-05-13T00:21:25.129549747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:25.129705 env[1314]: time="2025-05-13T00:21:25.129681426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:25.132154 env[1314]: time="2025-05-13T00:21:25.132115889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b416061baafb325b1343c709357a7fe0b0443171c83f5b1eaefa70fd51ae6def pid=3421 runtime=io.containerd.runc.v2 May 13 00:21:25.189618 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:25.191676 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:25.210510 env[1314]: time="2025-05-13T00:21:25.210463963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8mpgc,Uid:e9b4fa2a-016c-46cc-9f6a-60eeda8a8675,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6db7573618fa30ca49c4ae38dcda70822ca911944a87d0a1739ad37e74df7b5\"" May 13 00:21:25.211990 kubelet[2169]: E0513 00:21:25.211963 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.214126 env[1314]: time="2025-05-13T00:21:25.214051817Z" level=info msg="CreateContainer within sandbox \"a6db7573618fa30ca49c4ae38dcda70822ca911944a87d0a1739ad37e74df7b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:21:25.214461 env[1314]: time="2025-05-13T00:21:25.213820219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pq5g5,Uid:20d220dc-db8c-4a1e-b56c-13a7a40a2461,Namespace:kube-system,Attempt:0,} returns sandbox id \"b416061baafb325b1343c709357a7fe0b0443171c83f5b1eaefa70fd51ae6def\"" May 13 00:21:25.215332 kubelet[2169]: E0513 00:21:25.215301 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.217517 env[1314]: time="2025-05-13T00:21:25.217151275Z" level=info msg="CreateContainer within sandbox \"b416061baafb325b1343c709357a7fe0b0443171c83f5b1eaefa70fd51ae6def\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:21:25.231399 env[1314]: time="2025-05-13T00:21:25.231339572Z" level=info msg="CreateContainer within sandbox \"a6db7573618fa30ca49c4ae38dcda70822ca911944a87d0a1739ad37e74df7b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc1450abb913ccc12339c00ef445a45ac284cd9f27b45e028220f42739b3f395\"" May 13 00:21:25.231917 env[1314]: time="2025-05-13T00:21:25.231855088Z" level=info msg="StartContainer for \"fc1450abb913ccc12339c00ef445a45ac284cd9f27b45e028220f42739b3f395\"" May 13 00:21:25.237192 env[1314]: time="2025-05-13T00:21:25.235693581Z" level=info msg="CreateContainer within sandbox \"b416061baafb325b1343c709357a7fe0b0443171c83f5b1eaefa70fd51ae6def\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb41fc52845c6a40b66a5ef89a27d30568ce8461c7ede5d307253e1d8422e725\"" May 13 00:21:25.237192 env[1314]: time="2025-05-13T00:21:25.236325856Z" level=info msg="StartContainer for \"bb41fc52845c6a40b66a5ef89a27d30568ce8461c7ede5d307253e1d8422e725\"" May 13 00:21:25.296560 env[1314]: time="2025-05-13T00:21:25.296511661Z" level=info msg="StartContainer for \"fc1450abb913ccc12339c00ef445a45ac284cd9f27b45e028220f42739b3f395\" returns successfully" May 13 00:21:25.305701 env[1314]: time="2025-05-13T00:21:25.305648875Z" level=info msg="StartContainer 
for \"bb41fc52845c6a40b66a5ef89a27d30568ce8461c7ede5d307253e1d8422e725\" returns successfully" May 13 00:21:25.763587 kubelet[2169]: E0513 00:21:25.763542 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.766842 kubelet[2169]: E0513 00:21:25.766748 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:25.819096 kubelet[2169]: I0513 00:21:25.819029 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pq5g5" podStartSLOduration=19.819014368 podStartE2EDuration="19.819014368s" podCreationTimestamp="2025-05-13 00:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:25.818513091 +0000 UTC m=+35.236159069" watchObservedRunningTime="2025-05-13 00:21:25.819014368 +0000 UTC m=+35.236660346" May 13 00:21:25.871837 kubelet[2169]: I0513 00:21:25.871763 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8mpgc" podStartSLOduration=19.871745547 podStartE2EDuration="19.871745547s" podCreationTimestamp="2025-05-13 00:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:25.870508516 +0000 UTC m=+35.288154494" watchObservedRunningTime="2025-05-13 00:21:25.871745547 +0000 UTC m=+35.289391525" May 13 00:21:26.768028 kubelet[2169]: E0513 00:21:26.767986 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:26.768759 kubelet[2169]: E0513 00:21:26.768739 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:27.770046 kubelet[2169]: E0513 00:21:27.770007 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:27.770498 kubelet[2169]: E0513 00:21:27.770476 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:28.835689 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:60498.service. May 13 00:21:28.872478 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:28.873707 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:28.876994 systemd-logind[1299]: New session 8 of user core. May 13 00:21:28.877901 systemd[1]: Started session-8.scope. May 13 00:21:28.988882 sshd[3571]: pam_unix(sshd:session): session closed for user core May 13 00:21:28.991641 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:60498.service: Deactivated successfully. May 13 00:21:28.992554 systemd-logind[1299]: Session 8 logged out. Waiting for processes to exit. May 13 00:21:28.992610 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:21:28.993275 systemd-logind[1299]: Removed session 8. 
May 13 00:21:33.992229 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:51608.service. May 13 00:21:34.026734 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:34.028387 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:34.033117 systemd-logind[1299]: New session 9 of user core. May 13 00:21:34.034026 systemd[1]: Started session-9.scope. May 13 00:21:34.148928 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:51612.service. May 13 00:21:34.150101 sshd[3588]: pam_unix(sshd:session): session closed for user core May 13 00:21:34.152592 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:51608.service: Deactivated successfully. May 13 00:21:34.153596 systemd-logind[1299]: Session 9 logged out. Waiting for processes to exit. May 13 00:21:34.153625 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:21:34.154235 systemd-logind[1299]: Removed session 9. May 13 00:21:34.184001 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 51612 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:34.185587 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:34.189306 systemd-logind[1299]: New session 10 of user core. May 13 00:21:34.190266 systemd[1]: Started session-10.scope. May 13 00:21:34.345486 sshd[3602]: pam_unix(sshd:session): session closed for user core May 13 00:21:34.347795 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:51628.service. May 13 00:21:34.356767 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:51612.service: Deactivated successfully. May 13 00:21:34.358200 systemd-logind[1299]: Session 10 logged out. Waiting for processes to exit. May 13 00:21:34.358270 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:21:34.362774 systemd-logind[1299]: Removed session 10. May 13 00:21:34.391863 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 51628 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:34.393043 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:34.396249 systemd-logind[1299]: New session 11 of user core. May 13 00:21:34.397192 systemd[1]: Started session-11.scope. May 13 00:21:34.507085 sshd[3614]: pam_unix(sshd:session): session closed for user core May 13 00:21:34.510093 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:51628.service: Deactivated successfully. May 13 00:21:34.511154 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:21:34.511627 systemd-logind[1299]: Session 11 logged out. Waiting for processes to exit. May 13 00:21:34.512364 systemd-logind[1299]: Removed session 11. May 13 00:21:39.510083 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:51634.service. May 13 00:21:39.543678 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 51634 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:39.544855 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:39.548420 systemd-logind[1299]: New session 12 of user core. May 13 00:21:39.548900 systemd[1]: Started session-12.scope. May 13 00:21:39.658927 sshd[3633]: pam_unix(sshd:session): session closed for user core May 13 00:21:39.661412 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:51634.service: Deactivated successfully. May 13 00:21:39.662330 systemd-logind[1299]: Session 12 logged out. Waiting for processes to exit. 
May 13 00:21:39.662410 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:21:39.663046 systemd-logind[1299]: Removed session 12. May 13 00:21:44.662282 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:40498.service. May 13 00:21:44.698889 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:44.700305 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:44.706036 systemd[1]: Started session-13.scope. May 13 00:21:44.707106 systemd-logind[1299]: New session 13 of user core. May 13 00:21:44.831601 sshd[3647]: pam_unix(sshd:session): session closed for user core May 13 00:21:44.833583 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:40502.service. May 13 00:21:44.835965 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:40498.service: Deactivated successfully. May 13 00:21:44.839150 systemd-logind[1299]: Session 13 logged out. Waiting for processes to exit. May 13 00:21:44.839276 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:21:44.840511 systemd-logind[1299]: Removed session 13. May 13 00:21:44.877588 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 40502 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:44.879087 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:44.885676 systemd[1]: Started session-14.scope. May 13 00:21:44.886126 systemd-logind[1299]: New session 14 of user core. May 13 00:21:45.160141 sshd[3660]: pam_unix(sshd:session): session closed for user core May 13 00:21:45.162294 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:40518.service. May 13 00:21:45.163722 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:40502.service: Deactivated successfully. May 13 00:21:45.164568 systemd-logind[1299]: Session 14 logged out. Waiting for processes to exit. May 13 00:21:45.164598 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:21:45.165270 systemd-logind[1299]: Removed session 14. May 13 00:21:45.200179 sshd[3672]: Accepted publickey for core from 10.0.0.1 port 40518 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:45.201641 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:45.204895 systemd-logind[1299]: New session 15 of user core. May 13 00:21:45.205671 systemd[1]: Started session-15.scope. May 13 00:21:46.644987 sshd[3672]: pam_unix(sshd:session): session closed for user core May 13 00:21:46.646958 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:40524.service. May 13 00:21:46.650472 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:40518.service: Deactivated successfully. May 13 00:21:46.652047 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:21:46.652407 systemd-logind[1299]: Session 15 logged out. Waiting for processes to exit. May 13 00:21:46.654477 systemd-logind[1299]: Removed session 15. May 13 00:21:46.685174 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 40524 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:46.686482 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:46.690151 systemd-logind[1299]: New session 16 of user core. May 13 00:21:46.690951 systemd[1]: Started session-16.scope. May 13 00:21:46.901897 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:40530.service. 
May 13 00:21:46.902417 sshd[3694]: pam_unix(sshd:session): session closed for user core May 13 00:21:46.911585 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:40524.service: Deactivated successfully. May 13 00:21:46.912466 systemd-logind[1299]: Session 16 logged out. Waiting for processes to exit. May 13 00:21:46.912506 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:21:46.913645 systemd-logind[1299]: Removed session 16. May 13 00:21:46.938273 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 40530 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:46.939778 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:46.942733 systemd-logind[1299]: New session 17 of user core. May 13 00:21:46.943471 systemd[1]: Started session-17.scope. May 13 00:21:47.051927 sshd[3707]: pam_unix(sshd:session): session closed for user core May 13 00:21:47.054193 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:40530.service: Deactivated successfully. May 13 00:21:47.055126 systemd-logind[1299]: Session 17 logged out. Waiting for processes to exit. May 13 00:21:47.055193 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:21:47.056229 systemd-logind[1299]: Removed session 17. May 13 00:21:52.055168 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:40532.service. May 13 00:21:52.089227 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 40532 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:52.090782 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:52.094657 systemd-logind[1299]: New session 18 of user core. May 13 00:21:52.094882 systemd[1]: Started session-18.scope. May 13 00:21:52.199703 sshd[3725]: pam_unix(sshd:session): session closed for user core May 13 00:21:52.202109 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:40532.service: Deactivated successfully. May 13 00:21:52.203071 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:21:52.203076 systemd-logind[1299]: Session 18 logged out. Waiting for processes to exit. May 13 00:21:52.204022 systemd-logind[1299]: Removed session 18. May 13 00:21:57.202503 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:56590.service. May 13 00:21:57.236820 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 56590 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:21:57.238405 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:57.241946 systemd-logind[1299]: New session 19 of user core. May 13 00:21:57.242624 systemd[1]: Started session-19.scope. May 13 00:21:57.347202 sshd[3742]: pam_unix(sshd:session): session closed for user core May 13 00:21:57.349600 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:56590.service: Deactivated successfully. May 13 00:21:57.350547 systemd-logind[1299]: Session 19 logged out. Waiting for processes to exit. May 13 00:21:57.350601 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:21:57.351442 systemd-logind[1299]: Removed session 19. May 13 00:22:02.350177 systemd[1]: Started sshd@19-10.0.0.41:22-10.0.0.1:56596.service. May 13 00:22:02.385075 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 56596 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:22:02.386411 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:22:02.390124 systemd-logind[1299]: New session 20 of user core. 
May 13 00:22:02.390566 systemd[1]: Started session-20.scope. May 13 00:22:02.502737 sshd[3757]: pam_unix(sshd:session): session closed for user core May 13 00:22:02.505153 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:56596.service: Deactivated successfully. May 13 00:22:02.506169 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:22:02.506174 systemd-logind[1299]: Session 20 logged out. Waiting for processes to exit. May 13 00:22:02.507073 systemd-logind[1299]: Removed session 20. May 13 00:22:06.685675 kubelet[2169]: E0513 00:22:06.685640 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:07.505757 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:56074.service. May 13 00:22:07.540821 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 56074 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:22:07.542147 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:22:07.545992 systemd-logind[1299]: New session 21 of user core. May 13 00:22:07.546729 systemd[1]: Started session-21.scope. May 13 00:22:07.652716 sshd[3773]: pam_unix(sshd:session): session closed for user core May 13 00:22:07.654641 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:56082.service. May 13 00:22:07.656551 systemd-logind[1299]: Session 21 logged out. Waiting for processes to exit. May 13 00:22:07.656739 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:56074.service: Deactivated successfully. May 13 00:22:07.657516 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:22:07.657968 systemd-logind[1299]: Removed session 21. May 13 00:22:07.689208 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 56082 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:22:07.690689 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:22:07.695303 systemd-logind[1299]: New session 22 of user core. May 13 00:22:07.696416 systemd[1]: Started session-22.scope. May 13 00:22:09.455671 env[1314]: time="2025-05-13T00:22:09.455629593Z" level=info msg="StopContainer for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" with timeout 30 (s)" May 13 00:22:09.456765 env[1314]: time="2025-05-13T00:22:09.456445365Z" level=info msg="Stop container \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" with signal terminated" May 13 00:22:09.460333 systemd[1]: run-containerd-runc-k8s.io-3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913-runc.pQJRiP.mount: Deactivated successfully. May 13 00:22:09.488267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0-rootfs.mount: Deactivated successfully. 
May 13 00:22:09.490987 env[1314]: time="2025-05-13T00:22:09.490927814Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:22:09.494199 env[1314]: time="2025-05-13T00:22:09.494157824Z" level=info msg="shim disconnected" id=b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0 May 13 00:22:09.494199 env[1314]: time="2025-05-13T00:22:09.494200624Z" level=warning msg="cleaning up after shim disconnected" id=b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0 namespace=k8s.io May 13 00:22:09.494317 env[1314]: time="2025-05-13T00:22:09.494210505Z" level=info msg="cleaning up dead shim" May 13 00:22:09.496248 env[1314]: time="2025-05-13T00:22:09.496218135Z" level=info msg="StopContainer for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" with timeout 2 (s)" May 13 00:22:09.496536 env[1314]: time="2025-05-13T00:22:09.496512500Z" level=info msg="Stop container \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" with signal terminated" May 13 00:22:09.501989 systemd-networkd[1091]: lxc_health: Link DOWN May 13 00:22:09.501995 systemd-networkd[1091]: lxc_health: Lost carrier May 13 00:22:09.504264 env[1314]: time="2025-05-13T00:22:09.504220858Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3837 runtime=io.containerd.runc.v2\n" May 13 00:22:09.506564 env[1314]: time="2025-05-13T00:22:09.506527213Z" level=info msg="StopContainer for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" returns successfully" May 13 00:22:09.507085 env[1314]: time="2025-05-13T00:22:09.507051821Z" level=info msg="StopPodSandbox for \"cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1\"" May 13 00:22:09.507131 env[1314]: time="2025-05-13T00:22:09.507112702Z" level=info msg="Container to stop \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:22:09.508947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1-shm.mount: Deactivated successfully. 
May 13 00:22:09.545331 env[1314]: time="2025-05-13T00:22:09.545281768Z" level=info msg="shim disconnected" id=cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1
May 13 00:22:09.545331 env[1314]: time="2025-05-13T00:22:09.545321248Z" level=warning msg="cleaning up after shim disconnected" id=cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1 namespace=k8s.io
May 13 00:22:09.545331 env[1314]: time="2025-05-13T00:22:09.545331768Z" level=info msg="cleaning up dead shim"
May 13 00:22:09.546146 env[1314]: time="2025-05-13T00:22:09.546115860Z" level=info msg="shim disconnected" id=3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913
May 13 00:22:09.546234 env[1314]: time="2025-05-13T00:22:09.546218062Z" level=warning msg="cleaning up after shim disconnected" id=3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913 namespace=k8s.io
May 13 00:22:09.546297 env[1314]: time="2025-05-13T00:22:09.546275023Z" level=info msg="cleaning up dead shim"
May 13 00:22:09.553523 env[1314]: time="2025-05-13T00:22:09.553482253Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n"
May 13 00:22:09.553821 env[1314]: time="2025-05-13T00:22:09.553792098Z" level=info msg="TearDown network for sandbox \"cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1\" successfully"
May 13 00:22:09.553821 env[1314]: time="2025-05-13T00:22:09.553818259Z" level=info msg="StopPodSandbox for \"cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1\" returns successfully"
May 13 00:22:09.557695 env[1314]: time="2025-05-13T00:22:09.557663598Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n"
May 13 00:22:09.561390 env[1314]: time="2025-05-13T00:22:09.559595427Z" level=info msg="StopContainer for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" returns successfully"
May 13 00:22:09.561637 env[1314]: time="2025-05-13T00:22:09.561607698Z" level=info msg="StopPodSandbox for \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\""
May 13 00:22:09.561757 env[1314]: time="2025-05-13T00:22:09.561738100Z" level=info msg="Container to stop \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:22:09.561822 env[1314]: time="2025-05-13T00:22:09.561806701Z" level=info msg="Container to stop \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:22:09.561879 env[1314]: time="2025-05-13T00:22:09.561864022Z" level=info msg="Container to stop \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:22:09.561953 env[1314]: time="2025-05-13T00:22:09.561936623Z" level=info msg="Container to stop \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:22:09.562015 env[1314]: time="2025-05-13T00:22:09.562000184Z" level=info msg="Container to stop \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:22:09.583220 env[1314]: time="2025-05-13T00:22:09.583175029Z" level=info msg="shim disconnected" id=1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666
May 13 00:22:09.583527 env[1314]: time="2025-05-13T00:22:09.583506434Z" level=warning msg="cleaning up after shim disconnected" id=1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666 namespace=k8s.io
May 13 00:22:09.583611 env[1314]: time="2025-05-13T00:22:09.583597395Z" level=info msg="cleaning up dead shim"
May 13 00:22:09.590848 env[1314]: time="2025-05-13T00:22:09.590814466Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n"
May 13 00:22:09.591254 env[1314]: time="2025-05-13T00:22:09.591226192Z" level=info msg="TearDown network for sandbox \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" successfully"
May 13 00:22:09.591340 env[1314]: time="2025-05-13T00:22:09.591322794Z" level=info msg="StopPodSandbox for \"1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666\" returns successfully"
May 13 00:22:09.753983 kubelet[2169]: I0513 00:22:09.753089 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-xtables-lock\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.753983 kubelet[2169]: I0513 00:22:09.753145 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-config-path\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.753983 kubelet[2169]: I0513 00:22:09.753165 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cni-path\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.753983 kubelet[2169]: I0513 00:22:09.753182 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-bpf-maps\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.753983 kubelet[2169]: I0513 00:22:09.753201 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-hubble-tls\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.753983 kubelet[2169]: I0513 00:22:09.753215 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-net\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754591 kubelet[2169]: I0513 00:22:09.753231 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-hostproc\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754591 kubelet[2169]: I0513 00:22:09.753246 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-lib-modules\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754591 kubelet[2169]: I0513 00:22:09.753267 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff2c8254-a99f-4511-8494-ecb1d0d05676-clustermesh-secrets\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754591 kubelet[2169]: I0513 00:22:09.753356 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-kernel\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754591 kubelet[2169]: I0513 00:22:09.753407 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6864ada0-620f-45a3-b2fd-26f713126f11-cilium-config-path\") pod \"6864ada0-620f-45a3-b2fd-26f713126f11\" (UID: \"6864ada0-620f-45a3-b2fd-26f713126f11\") "
May 13 00:22:09.754591 kubelet[2169]: I0513 00:22:09.753427 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrtxb\" (UniqueName: \"kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-kube-api-access-hrtxb\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754792 kubelet[2169]: I0513 00:22:09.753442 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-etc-cni-netd\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754792 kubelet[2169]: I0513 00:22:09.753457 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-cgroup\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754792 kubelet[2169]: I0513 00:22:09.753471 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-run\") pod \"ff2c8254-a99f-4511-8494-ecb1d0d05676\" (UID: \"ff2c8254-a99f-4511-8494-ecb1d0d05676\") "
May 13 00:22:09.754792 kubelet[2169]: I0513 00:22:09.753514 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcv2m\" (UniqueName: \"kubernetes.io/projected/6864ada0-620f-45a3-b2fd-26f713126f11-kube-api-access-bcv2m\") pod \"6864ada0-620f-45a3-b2fd-26f713126f11\" (UID: \"6864ada0-620f-45a3-b2fd-26f713126f11\") "
May 13 00:22:09.761516 kubelet[2169]: I0513 00:22:09.761467 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:22:09.761516 kubelet[2169]: I0513 00:22:09.761484 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761666 kubelet[2169]: I0513 00:22:09.761529 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761666 kubelet[2169]: I0513 00:22:09.761545 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761666 kubelet[2169]: I0513 00:22:09.761562 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cni-path" (OuterVolumeSpecName: "cni-path") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761666 kubelet[2169]: I0513 00:22:09.761575 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761666 kubelet[2169]: I0513 00:22:09.761592 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-hostproc" (OuterVolumeSpecName: "hostproc") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761794 kubelet[2169]: I0513 00:22:09.761613 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.761794 kubelet[2169]: I0513 00:22:09.761628 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.763426 kubelet[2169]: I0513 00:22:09.763333 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6864ada0-620f-45a3-b2fd-26f713126f11-kube-api-access-bcv2m" (OuterVolumeSpecName: "kube-api-access-bcv2m") pod "6864ada0-620f-45a3-b2fd-26f713126f11" (UID: "6864ada0-620f-45a3-b2fd-26f713126f11"). InnerVolumeSpecName "kube-api-access-bcv2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:22:09.763426 kubelet[2169]: I0513 00:22:09.763410 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6864ada0-620f-45a3-b2fd-26f713126f11-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6864ada0-620f-45a3-b2fd-26f713126f11" (UID: "6864ada0-620f-45a3-b2fd-26f713126f11"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:22:09.763426 kubelet[2169]: I0513 00:22:09.763423 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.763560 kubelet[2169]: I0513 00:22:09.763444 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:09.764110 kubelet[2169]: I0513 00:22:09.764083 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2c8254-a99f-4511-8494-ecb1d0d05676-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:22:09.765128 kubelet[2169]: I0513 00:22:09.765084 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:22:09.765896 kubelet[2169]: I0513 00:22:09.765862 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-kube-api-access-hrtxb" (OuterVolumeSpecName: "kube-api-access-hrtxb") pod "ff2c8254-a99f-4511-8494-ecb1d0d05676" (UID: "ff2c8254-a99f-4511-8494-ecb1d0d05676"). InnerVolumeSpecName "kube-api-access-hrtxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:22:09.850239 kubelet[2169]: I0513 00:22:09.850207 2169 scope.go:117] "RemoveContainer" containerID="3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913"
May 13 00:22:09.855658 env[1314]: time="2025-05-13T00:22:09.855608686Z" level=info msg="RemoveContainer for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\""
May 13 00:22:09.856769 kubelet[2169]: I0513 00:22:09.856682 2169 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.856769 kubelet[2169]: I0513 00:22:09.856751 2169 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.856769 kubelet[2169]: I0513 00:22:09.856760 2169 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.856769 kubelet[2169]: I0513 00:22:09.856770 2169 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.856769 kubelet[2169]: I0513 00:22:09.856777 2169 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856786 2169 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff2c8254-a99f-4511-8494-ecb1d0d05676-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856794 2169 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856802 2169 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856809 2169 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hrtxb\" (UniqueName: \"kubernetes.io/projected/ff2c8254-a99f-4511-8494-ecb1d0d05676-kube-api-access-hrtxb\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856817 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6864ada0-620f-45a3-b2fd-26f713126f11-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856825 2169 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857174 kubelet[2169]: I0513 00:22:09.856832 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857351 kubelet[2169]: I0513 00:22:09.856839 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857351 kubelet[2169]: I0513 00:22:09.856846 2169 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bcv2m\" (UniqueName: \"kubernetes.io/projected/6864ada0-620f-45a3-b2fd-26f713126f11-kube-api-access-bcv2m\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857351 kubelet[2169]: I0513 00:22:09.856854 2169 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff2c8254-a99f-4511-8494-ecb1d0d05676-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.857351 kubelet[2169]: I0513 00:22:09.856861 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff2c8254-a99f-4511-8494-ecb1d0d05676-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:22:09.862648 env[1314]: time="2025-05-13T00:22:09.862605033Z" level=info msg="RemoveContainer for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" returns successfully"
May 13 00:22:09.862866 kubelet[2169]: I0513 00:22:09.862839 2169 scope.go:117] "RemoveContainer" containerID="95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3"
May 13 00:22:09.866187 env[1314]: time="2025-05-13T00:22:09.866153728Z" level=info msg="RemoveContainer for \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\""
May 13 00:22:09.870334 env[1314]: time="2025-05-13T00:22:09.870294311Z" level=info msg="RemoveContainer for \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\" returns successfully"
May 13 00:22:09.870503 kubelet[2169]: I0513 00:22:09.870478 2169 scope.go:117] "RemoveContainer" containerID="06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6"
May 13 00:22:09.872290 env[1314]: time="2025-05-13T00:22:09.872263861Z" level=info msg="RemoveContainer for \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\""
May 13 00:22:09.875257 env[1314]: time="2025-05-13T00:22:09.875217907Z" level=info msg="RemoveContainer for \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\" returns successfully"
May 13 00:22:09.875434 kubelet[2169]: I0513 00:22:09.875407 2169 scope.go:117] "RemoveContainer" containerID="0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f"
May 13 00:22:09.878236 env[1314]: time="2025-05-13T00:22:09.878180952Z" level=info msg="RemoveContainer for \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\""
May 13 00:22:09.880733 env[1314]: time="2025-05-13T00:22:09.880694271Z" level=info msg="RemoveContainer for \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\" returns successfully"
May 13 00:22:09.880977 kubelet[2169]: I0513 00:22:09.880953 2169 scope.go:117] "RemoveContainer" containerID="dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17"
May 13 00:22:09.882108 env[1314]: time="2025-05-13T00:22:09.882083132Z" level=info msg="RemoveContainer for \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\""
May 13 00:22:09.884892 env[1314]: time="2025-05-13T00:22:09.884850654Z" level=info msg="RemoveContainer for \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\" returns successfully"
May 13 00:22:09.888966 kubelet[2169]: I0513 00:22:09.888940 2169 scope.go:117] "RemoveContainer" containerID="3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913"
May 13 00:22:09.889324 env[1314]: time="2025-05-13T00:22:09.889245802Z" level=error msg="ContainerStatus for \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\": not found"
May 13 00:22:09.890069 kubelet[2169]: E0513 00:22:09.890039 2169 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\": not found" containerID="3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913"
May 13 00:22:09.890160 kubelet[2169]: I0513 00:22:09.890079 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913"} err="failed to get container status \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\": rpc error: code = NotFound desc = an error occurred when try to find container \"3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913\": not found"
May 13 00:22:09.890199 kubelet[2169]: I0513 00:22:09.890163 2169 scope.go:117] "RemoveContainer" containerID="95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3"
May 13 00:22:09.890422 env[1314]: time="2025-05-13T00:22:09.890353699Z" level=error msg="ContainerStatus for \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\": not found"
May 13 00:22:09.890562 kubelet[2169]: E0513 00:22:09.890540 2169 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\": not found" containerID="95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3"
May 13 00:22:09.890602 kubelet[2169]: I0513 00:22:09.890570 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3"} err="failed to get container status \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"95d7b92f3b58bf8b3eb6f8331b98b7f7954ef9f307364f4a90e25a08b19729f3\": not found"
May 13 00:22:09.890826 kubelet[2169]: I0513 00:22:09.890766 2169 scope.go:117] "RemoveContainer" containerID="06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6"
May 13 00:22:09.891030 env[1314]: time="2025-05-13T00:22:09.890978268Z" level=error msg="ContainerStatus for \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\": not found"
May 13 00:22:09.891228 kubelet[2169]: E0513 00:22:09.891205 2169 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\": not found" containerID="06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6"
May 13 00:22:09.891272 kubelet[2169]: I0513 00:22:09.891235 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6"} err="failed to get container status \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\": rpc error: code = NotFound desc = an error occurred when try to find container \"06675995ca16d27efc76d7d4718a5ea6cf1a771f4410da9b75bf8b5f84247ca6\": not found"
May 13 00:22:09.891272 kubelet[2169]: I0513 00:22:09.891251 2169 scope.go:117] "RemoveContainer" containerID="0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f"
May 13 00:22:09.891504 env[1314]: time="2025-05-13T00:22:09.891455516Z" level=error msg="ContainerStatus for \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\": not found"
May 13 00:22:09.891636 kubelet[2169]: E0513 00:22:09.891609 2169 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\": not found" containerID="0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f"
May 13 00:22:09.891671 kubelet[2169]: I0513 00:22:09.891640 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f"} err="failed to get container status \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e37a746ee9a3699248b42a560763d0c135409dc14bf8d649f227ffb95312f7f\": not found"
May 13 00:22:09.891671 kubelet[2169]: I0513 00:22:09.891654 2169 scope.go:117] "RemoveContainer" containerID="dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17"
May 13 00:22:09.891845 env[1314]: time="2025-05-13T00:22:09.891803361Z" level=error msg="ContainerStatus for \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\": not found"
May 13 00:22:09.891981 kubelet[2169]: E0513 00:22:09.891959 2169 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\": not found" containerID="dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17"
May 13 00:22:09.892018 kubelet[2169]: I0513 00:22:09.891984 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17"} err="failed to get container status \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfcb991be138f4d4b55dd3bdc540bf8daa8e5fcddc882b25b0b2ea6be0c29e17\": not found"
May 13 00:22:09.892018 kubelet[2169]: I0513 00:22:09.891999 2169 scope.go:117] "RemoveContainer" containerID="b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0"
May 13 00:22:09.893178 env[1314]: time="2025-05-13T00:22:09.893151982Z" level=info msg="RemoveContainer for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\""
May 13 00:22:09.895734 env[1314]: time="2025-05-13T00:22:09.895676220Z" level=info msg="RemoveContainer for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" returns successfully"
May 13 00:22:09.896000 kubelet[2169]: I0513 00:22:09.895963 2169 scope.go:117] "RemoveContainer" containerID="b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0"
May 13 00:22:09.896225 env[1314]: time="2025-05-13T00:22:09.896165708Z" level=error msg="ContainerStatus for \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\": not found"
May 13 00:22:09.896329 kubelet[2169]: E0513 00:22:09.896311 2169 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\": not found" containerID="b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0"
May 13 00:22:09.896385 kubelet[2169]: I0513 00:22:09.896335 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0"} err="failed to get container status \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\": rpc error: code = NotFound desc = an error occurred when try to find container \"b099b001cde83c13e1f25e3da943e244684f593fc875ab03a8727e2012c04cf0\": not found"
May 13 00:22:10.451272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3048a94e5cb1e6e2904b8be0fc5adef1c4452b36b912226a6c8c5f86ffce2913-rootfs.mount: Deactivated successfully.
May 13 00:22:10.451446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cde5a017cbf6264d90475aa8a0d1f036a4104ecbc171ce1ffa5599e3a6ec86d1-rootfs.mount: Deactivated successfully.
May 13 00:22:10.451528 systemd[1]: var-lib-kubelet-pods-6864ada0\x2d620f\x2d45a3\x2db2fd\x2d26f713126f11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbcv2m.mount: Deactivated successfully.
May 13 00:22:10.451607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666-rootfs.mount: Deactivated successfully.
May 13 00:22:10.451683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1078f888761de10ebd6585807bed0c8af401b979d7dec14947c13da0f62c4666-shm.mount: Deactivated successfully.
May 13 00:22:10.451759 systemd[1]: var-lib-kubelet-pods-ff2c8254\x2da99f\x2d4511\x2d8494\x2decb1d0d05676-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrtxb.mount: Deactivated successfully.
May 13 00:22:10.451835 systemd[1]: var-lib-kubelet-pods-ff2c8254\x2da99f\x2d4511\x2d8494\x2decb1d0d05676-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:22:10.451927 systemd[1]: var-lib-kubelet-pods-ff2c8254\x2da99f\x2d4511\x2d8494\x2decb1d0d05676-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:22:10.687391 kubelet[2169]: I0513 00:22:10.687337 2169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6864ada0-620f-45a3-b2fd-26f713126f11" path="/var/lib/kubelet/pods/6864ada0-620f-45a3-b2fd-26f713126f11/volumes"
May 13 00:22:10.688262 kubelet[2169]: I0513 00:22:10.687731 2169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" path="/var/lib/kubelet/pods/ff2c8254-a99f-4511-8494-ecb1d0d05676/volumes"
May 13 00:22:10.736186 kubelet[2169]: E0513 00:22:10.736078 2169 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:22:11.406847 sshd[3786]: pam_unix(sshd:session): session closed for user core
May 13 00:22:11.409111 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:56096.service.
May 13 00:22:11.411429 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:56082.service: Deactivated successfully.
May 13 00:22:11.413103 systemd[1]: session-22.scope: Deactivated successfully.
May 13 00:22:11.413122 systemd-logind[1299]: Session 22 logged out. Waiting for processes to exit.
May 13 00:22:11.414472 systemd-logind[1299]: Removed session 22.
May 13 00:22:11.445610 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 56096 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:22:11.446772 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:22:11.450291 systemd-logind[1299]: New session 23 of user core.
May 13 00:22:11.451135 systemd[1]: Started session-23.scope.
May 13 00:22:12.298998 kubelet[2169]: I0513 00:22:12.298952 2169 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:22:12Z","lastTransitionTime":"2025-05-13T00:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:22:12.769768 sshd[3954]: pam_unix(sshd:session): session closed for user core
May 13 00:22:12.771418 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:40728.service.
May 13 00:22:12.772478 kubelet[2169]: I0513 00:22:12.772433 2169 topology_manager.go:215] "Topology Admit Handler" podUID="7ca48a3b-3bfb-469c-a055-9d7688399b05" podNamespace="kube-system" podName="cilium-wn4v6"
May 13 00:22:12.772580 kubelet[2169]: E0513 00:22:12.772559 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" containerName="apply-sysctl-overwrites"
May 13 00:22:12.772580 kubelet[2169]: E0513 00:22:12.772569 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6864ada0-620f-45a3-b2fd-26f713126f11" containerName="cilium-operator"
May 13 00:22:12.772580 kubelet[2169]: E0513 00:22:12.772575 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" containerName="mount-bpf-fs"
May 13 00:22:12.772580 kubelet[2169]: E0513 00:22:12.772581 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" containerName="clean-cilium-state"
May 13 00:22:12.772670 kubelet[2169]: E0513 00:22:12.772587 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" containerName="cilium-agent"
May 13 00:22:12.772670 kubelet[2169]: E0513 00:22:12.772594 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" containerName="mount-cgroup"
May 13 00:22:12.772670 kubelet[2169]: I0513 00:22:12.772613 2169 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2c8254-a99f-4511-8494-ecb1d0d05676" containerName="cilium-agent"
May 13 00:22:12.772670 kubelet[2169]: I0513 00:22:12.772618 2169 memory_manager.go:354] "RemoveStaleState removing state" podUID="6864ada0-620f-45a3-b2fd-26f713126f11" containerName="cilium-operator"
May 13 00:22:12.785229 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:56096.service: Deactivated successfully.
May 13 00:22:12.786868 systemd-logind[1299]: Session 23 logged out. Waiting for processes to exit.
May 13 00:22:12.786919 systemd[1]: session-23.scope: Deactivated successfully.
May 13 00:22:12.788765 systemd-logind[1299]: Removed session 23.
May 13 00:22:12.832060 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 40728 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:22:12.833317 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:22:12.836825 systemd-logind[1299]: New session 24 of user core.
May 13 00:22:12.837635 systemd[1]: Started session-24.scope.
May 13 00:22:12.876134 kubelet[2169]: I0513 00:22:12.876091 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-run\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876134 kubelet[2169]: I0513 00:22:12.876137 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-clustermesh-secrets\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876267 kubelet[2169]: I0513 00:22:12.876160 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-ipsec-secrets\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876267 kubelet[2169]: I0513 00:22:12.876178 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-net\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876267 kubelet[2169]: I0513 00:22:12.876195 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-kernel\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876267 kubelet[2169]: I0513 00:22:12.876215 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-bpf-maps\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876267 kubelet[2169]: I0513 00:22:12.876243 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-lib-modules\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876267 kubelet[2169]: I0513 00:22:12.876262 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-etc-cni-netd\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876454 kubelet[2169]: I0513 00:22:12.876277 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-hostproc\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876454 kubelet[2169]: I0513 00:22:12.876294 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cni-path\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876454 kubelet[2169]: I0513 00:22:12.876344 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-cgroup\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876454 kubelet[2169]: I0513 00:22:12.876402 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-xtables-lock\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876454 kubelet[2169]: I0513 00:22:12.876431 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-config-path\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876454 kubelet[2169]: I0513 00:22:12.876453 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mm6\" (UniqueName: \"kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-kube-api-access-w5mm6\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.876584 kubelet[2169]: I0513 00:22:12.876470 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-hubble-tls\") pod \"cilium-wn4v6\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") " pod="kube-system/cilium-wn4v6"
May 13 00:22:12.960633 sshd[3968]: pam_unix(sshd:session): session closed for user core
May 13 00:22:12.961931 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:40740.service.
May 13 00:22:12.966519 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:40728.service: Deactivated successfully.
May 13 00:22:12.967397 systemd-logind[1299]: Session 24 logged out. Waiting for processes to exit.
May 13 00:22:12.967479 systemd[1]: session-24.scope: Deactivated successfully.
May 13 00:22:12.970649 systemd-logind[1299]: Removed session 24.
May 13 00:22:12.971063 kubelet[2169]: E0513 00:22:12.970891 2169 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-w5mm6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-wn4v6" podUID="7ca48a3b-3bfb-469c-a055-9d7688399b05"
May 13 00:22:13.003948 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 40740 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:22:13.005241 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:22:13.011600 systemd[1]: Started session-25.scope.
May 13 00:22:13.011811 systemd-logind[1299]: New session 25 of user core.
May 13 00:22:13.987901 kubelet[2169]: I0513 00:22:13.987858 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-net\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988358 kubelet[2169]: I0513 00:22:13.988337 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-cgroup\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988476 kubelet[2169]: I0513 00:22:13.988461 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-etc-cni-netd\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988555 kubelet[2169]: I0513 00:22:13.988543 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cni-path\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988645 kubelet[2169]: I0513 00:22:13.988632 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-bpf-maps\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988734 kubelet[2169]: I0513 00:22:13.988713 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-hubble-tls\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988827 kubelet[2169]: I0513 00:22:13.988813 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-hostproc\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988894 kubelet[2169]: I0513 00:22:13.988882 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-xtables-lock\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.988967 kubelet[2169]: I0513 00:22:13.987984 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989006 kubelet[2169]: I0513 00:22:13.988398 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989006 kubelet[2169]: I0513 00:22:13.988518 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989006 kubelet[2169]: I0513 00:22:13.988592 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cni-path" (OuterVolumeSpecName: "cni-path") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989006 kubelet[2169]: I0513 00:22:13.988685 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989103 kubelet[2169]: I0513 00:22:13.988893 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-hostproc" (OuterVolumeSpecName: "hostproc") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989103 kubelet[2169]: I0513 00:22:13.988917 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989176 kubelet[2169]: I0513 00:22:13.989161 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-config-path\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989251 kubelet[2169]: I0513 00:22:13.989238 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-run\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989315 kubelet[2169]: I0513 00:22:13.989304 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-kernel\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989382 kubelet[2169]: I0513 00:22:13.989268 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.989444 kubelet[2169]: I0513 00:22:13.989431 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-ipsec-secrets\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989525 kubelet[2169]: I0513 00:22:13.989513 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-clustermesh-secrets\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989601 kubelet[2169]: I0513 00:22:13.989587 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mm6\" (UniqueName: \"kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-kube-api-access-w5mm6\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989667 kubelet[2169]: I0513 00:22:13.989653 2169 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-lib-modules\") pod \"7ca48a3b-3bfb-469c-a055-9d7688399b05\" (UID: \"7ca48a3b-3bfb-469c-a055-9d7688399b05\") "
May 13 00:22:13.989764 kubelet[2169]: I0513 00:22:13.989750 2169 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.989828 kubelet[2169]: I0513 00:22:13.989817 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.989894 kubelet[2169]: I0513 00:22:13.989883 2169 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.989970 kubelet[2169]: I0513 00:22:13.989956 2169 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.990034 kubelet[2169]: I0513 00:22:13.990023 2169 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.990092 kubelet[2169]: I0513 00:22:13.990082 2169 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.990146 kubelet[2169]: I0513 00:22:13.990136 2169 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.990201 kubelet[2169]: I0513 00:22:13.990191 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 00:22:13.990294 kubelet[2169]: I0513 00:22:13.989468 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.990356 kubelet[2169]: I0513 00:22:13.990279 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:22:13.990628 kubelet[2169]: I0513 00:22:13.990565 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:22:13.992555 kubelet[2169]: I0513 00:22:13.992530 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:22:13.993142 systemd[1]: var-lib-kubelet-pods-7ca48a3b\x2d3bfb\x2d469c\x2da055\x2d9d7688399b05-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 13 00:22:13.993285 systemd[1]: var-lib-kubelet-pods-7ca48a3b\x2d3bfb\x2d469c\x2da055\x2d9d7688399b05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:22:13.993694 kubelet[2169]: I0513 00:22:13.993672 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:22:13.993793 kubelet[2169]: I0513 00:22:13.993689 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:22:13.995423 systemd[1]: var-lib-kubelet-pods-7ca48a3b\x2d3bfb\x2d469c\x2da055\x2d9d7688399b05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:22:13.995680 kubelet[2169]: I0513 00:22:13.995653 2169 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-kube-api-access-w5mm6" (OuterVolumeSpecName: "kube-api-access-w5mm6") pod "7ca48a3b-3bfb-469c-a055-9d7688399b05" (UID: "7ca48a3b-3bfb-469c-a055-9d7688399b05"). InnerVolumeSpecName "kube-api-access-w5mm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:22:13.997248 systemd[1]: var-lib-kubelet-pods-7ca48a3b\x2d3bfb\x2d469c\x2da055\x2d9d7688399b05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5mm6.mount: Deactivated successfully.
May 13 00:22:14.090881 kubelet[2169]: I0513 00:22:14.090839 2169 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.090881 kubelet[2169]: I0513 00:22:14.090871 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.090881 kubelet[2169]: I0513 00:22:14.090881 2169 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.090881 kubelet[2169]: I0513 00:22:14.090889 2169 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w5mm6\" (UniqueName: \"kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-kube-api-access-w5mm6\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.091108 kubelet[2169]: I0513 00:22:14.090898 2169 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ca48a3b-3bfb-469c-a055-9d7688399b05-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.091108 kubelet[2169]: I0513 00:22:14.090907 2169 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ca48a3b-3bfb-469c-a055-9d7688399b05-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.091108 kubelet[2169]: I0513 00:22:14.090916 2169 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ca48a3b-3bfb-469c-a055-9d7688399b05-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 00:22:14.903973 kubelet[2169]: I0513 00:22:14.903910 2169 topology_manager.go:215] "Topology Admit Handler" podUID="f5029a54-d4f0-465c-9238-659e29d1d0f5" podNamespace="kube-system" podName="cilium-whnbw"
May 13 00:22:15.096542 kubelet[2169]: I0513 00:22:15.096501 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-lib-modules\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.096969 kubelet[2169]: I0513 00:22:15.096930 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5029a54-d4f0-465c-9238-659e29d1d0f5-cilium-ipsec-secrets\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097089 kubelet[2169]: I0513 00:22:15.097073 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-host-proc-sys-kernel\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097183 kubelet[2169]: I0513 00:22:15.097166 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-cilium-run\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097260 kubelet[2169]: I0513 00:22:15.097247 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-cni-path\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097348 kubelet[2169]: I0513 00:22:15.097334 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-host-proc-sys-net\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097474 kubelet[2169]: I0513 00:22:15.097458 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5029a54-d4f0-465c-9238-659e29d1d0f5-cilium-config-path\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097557 kubelet[2169]: I0513 00:22:15.097544 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-cilium-cgroup\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097640 kubelet[2169]: I0513 00:22:15.097627 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-xtables-lock\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097717 kubelet[2169]: I0513 00:22:15.097705 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-bpf-maps\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097809 kubelet[2169]: I0513 00:22:15.097795 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5029a54-d4f0-465c-9238-659e29d1d0f5-clustermesh-secrets\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097882 kubelet[2169]: I0513 00:22:15.097868 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5029a54-d4f0-465c-9238-659e29d1d0f5-hubble-tls\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.097963 kubelet[2169]: I0513 00:22:15.097940 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzscz\" (UniqueName: \"kubernetes.io/projected/f5029a54-d4f0-465c-9238-659e29d1d0f5-kube-api-access-fzscz\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.098045 kubelet[2169]: I0513 00:22:15.098031 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-hostproc\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.098125 kubelet[2169]: I0513 00:22:15.098112 2169 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5029a54-d4f0-465c-9238-659e29d1d0f5-etc-cni-netd\") pod \"cilium-whnbw\" (UID: \"f5029a54-d4f0-465c-9238-659e29d1d0f5\") " pod="kube-system/cilium-whnbw"
May 13 00:22:15.507047 kubelet[2169]: E0513 00:22:15.507015 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:15.508090 env[1314]: time="2025-05-13T00:22:15.508033399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whnbw,Uid:f5029a54-d4f0-465c-9238-659e29d1d0f5,Namespace:kube-system,Attempt:0,}"
May 13 00:22:15.523530 env[1314]: time="2025-05-13T00:22:15.523434592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:22:15.523681 env[1314]: time="2025-05-13T00:22:15.523478913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:22:15.523681 env[1314]: time="2025-05-13T00:22:15.523530793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:15.523777 env[1314]: time="2025-05-13T00:22:15.523729876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4 pid=4015 runtime=io.containerd.runc.v2
May 13 00:22:15.587434 env[1314]: time="2025-05-13T00:22:15.587394315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-whnbw,Uid:f5029a54-d4f0-465c-9238-659e29d1d0f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\""
May 13 00:22:15.588175 kubelet[2169]: E0513 00:22:15.588135 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:15.590586 env[1314]: time="2025-05-13T00:22:15.590546315Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:22:15.599886 env[1314]: time="2025-05-13T00:22:15.599834992Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad8626d3c0721b0053aae3ddda491fd2a12bedf114432992606977892b40d5d3\""
May 13 00:22:15.600601 env[1314]: time="2025-05-13T00:22:15.600568201Z" level=info msg="StartContainer for \"ad8626d3c0721b0053aae3ddda491fd2a12bedf114432992606977892b40d5d3\""
May 13 00:22:15.661696 env[1314]: time="2025-05-13T00:22:15.661641288Z" level=info msg="StartContainer for \"ad8626d3c0721b0053aae3ddda491fd2a12bedf114432992606977892b40d5d3\" returns successfully"
May 13 00:22:15.685966 kubelet[2169]: E0513 00:22:15.685883 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:15.695765 env[1314]: time="2025-05-13T00:22:15.695713596Z" level=info msg="shim disconnected" id=ad8626d3c0721b0053aae3ddda491fd2a12bedf114432992606977892b40d5d3
May 13 00:22:15.695765 env[1314]: time="2025-05-13T00:22:15.695760356Z" level=warning msg="cleaning up after shim disconnected" id=ad8626d3c0721b0053aae3ddda491fd2a12bedf114432992606977892b40d5d3 namespace=k8s.io
May 13 00:22:15.695765 env[1314]: time="2025-05-13T00:22:15.695769877Z" level=info msg="cleaning up dead shim"
May 13 00:22:15.702239 env[1314]: time="2025-05-13T00:22:15.702169997Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4098 runtime=io.containerd.runc.v2\n"
May 13 00:22:15.737192 kubelet[2169]: E0513 00:22:15.737148 2169 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:22:15.872248 kubelet[2169]: E0513 00:22:15.871503 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:15.874030 env[1314]: time="2025-05-13T00:22:15.873992755Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:22:15.903000 env[1314]: time="2025-05-13T00:22:15.902463473Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"28d5f037c8c4afda466c5cef5a3955f6260cdaf881e63039c91e200c87c4b1ab\""
May 13 00:22:15.904916 env[1314]: time="2025-05-13T00:22:15.904882423Z" level=info msg="StartContainer for \"28d5f037c8c4afda466c5cef5a3955f6260cdaf881e63039c91e200c87c4b1ab\""
May 13 00:22:15.949413 env[1314]: time="2025-05-13T00:22:15.948500531Z" level=info msg="StartContainer for \"28d5f037c8c4afda466c5cef5a3955f6260cdaf881e63039c91e200c87c4b1ab\" returns successfully"
May 13 00:22:15.970866 env[1314]: time="2025-05-13T00:22:15.970815251Z" level=info msg="shim disconnected" id=28d5f037c8c4afda466c5cef5a3955f6260cdaf881e63039c91e200c87c4b1ab
May 13 00:22:15.970866 env[1314]: time="2025-05-13T00:22:15.970860252Z" level=warning msg="cleaning up after shim disconnected" id=28d5f037c8c4afda466c5cef5a3955f6260cdaf881e63039c91e200c87c4b1ab namespace=k8s.io
May 13 00:22:15.970866 env[1314]: time="2025-05-13T00:22:15.970870932Z" level=info msg="cleaning up dead shim"
May 13 00:22:15.977527 env[1314]: time="2025-05-13T00:22:15.977495215Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4160 runtime=io.containerd.runc.v2\n"
May 13 00:22:16.687761 kubelet[2169]: I0513 00:22:16.687714 2169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca48a3b-3bfb-469c-a055-9d7688399b05" path="/var/lib/kubelet/pods/7ca48a3b-3bfb-469c-a055-9d7688399b05/volumes"
May 13 00:22:16.873735 kubelet[2169]: E0513 00:22:16.873697 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:16.878129 env[1314]: time="2025-05-13T00:22:16.878073083Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:22:16.888800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477214160.mount: Deactivated successfully.
May 13 00:22:16.891378 env[1314]: time="2025-05-13T00:22:16.891313524Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"49453d9e7959d92963904c5e110609ba5d7f204141c9182b1975589bb968ba87\""
May 13 00:22:16.894075 env[1314]: time="2025-05-13T00:22:16.894019117Z" level=info msg="StartContainer for \"49453d9e7959d92963904c5e110609ba5d7f204141c9182b1975589bb968ba87\""
May 13 00:22:16.953258 env[1314]: time="2025-05-13T00:22:16.952957433Z" level=info msg="StartContainer for \"49453d9e7959d92963904c5e110609ba5d7f204141c9182b1975589bb968ba87\" returns successfully"
May 13 00:22:16.974888 env[1314]: time="2025-05-13T00:22:16.974828179Z" level=info msg="shim disconnected" id=49453d9e7959d92963904c5e110609ba5d7f204141c9182b1975589bb968ba87
May 13 00:22:16.974888 env[1314]: time="2025-05-13T00:22:16.974871339Z" level=warning msg="cleaning up after shim disconnected" id=49453d9e7959d92963904c5e110609ba5d7f204141c9182b1975589bb968ba87 namespace=k8s.io
May 13 00:22:16.974888 env[1314]: time="2025-05-13T00:22:16.974881299Z" level=info msg="cleaning up dead shim"
May 13 00:22:16.982775 env[1314]: time="2025-05-13T00:22:16.982724715Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4218 runtime=io.containerd.runc.v2\n"
May 13 00:22:17.204060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49453d9e7959d92963904c5e110609ba5d7f204141c9182b1975589bb968ba87-rootfs.mount: Deactivated successfully.
May 13 00:22:17.879129 kubelet[2169]: E0513 00:22:17.878858 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:17.881843 env[1314]: time="2025-05-13T00:22:17.881720162Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:22:17.894582 env[1314]: time="2025-05-13T00:22:17.894545352Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1dd6f81d1aca6e1533078fa6789b5a9c1a4ba786c3fcd61ec0e27eac60997d2\""
May 13 00:22:17.896078 env[1314]: time="2025-05-13T00:22:17.896047050Z" level=info msg="StartContainer for \"e1dd6f81d1aca6e1533078fa6789b5a9c1a4ba786c3fcd61ec0e27eac60997d2\""
May 13 00:22:17.943991 env[1314]: time="2025-05-13T00:22:17.943934452Z" level=info msg="StartContainer for \"e1dd6f81d1aca6e1533078fa6789b5a9c1a4ba786c3fcd61ec0e27eac60997d2\" returns successfully"
May 13 00:22:17.960021 env[1314]: time="2025-05-13T00:22:17.959964281Z" level=info msg="shim disconnected" id=e1dd6f81d1aca6e1533078fa6789b5a9c1a4ba786c3fcd61ec0e27eac60997d2
May 13 00:22:17.960021 env[1314]: time="2025-05-13T00:22:17.960020161Z" level=warning msg="cleaning up after shim disconnected" id=e1dd6f81d1aca6e1533078fa6789b5a9c1a4ba786c3fcd61ec0e27eac60997d2 namespace=k8s.io
May 13 00:22:17.960214 env[1314]: time="2025-05-13T00:22:17.960029882Z" level=info msg="cleaning up dead shim"
May 13 00:22:17.966336 env[1314]: time="2025-05-13T00:22:17.966299755Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:22:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4275 runtime=io.containerd.runc.v2\n"
May 13 00:22:18.204139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1dd6f81d1aca6e1533078fa6789b5a9c1a4ba786c3fcd61ec0e27eac60997d2-rootfs.mount: Deactivated successfully.
May 13 00:22:18.882581 kubelet[2169]: E0513 00:22:18.882549 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:18.885126 env[1314]: time="2025-05-13T00:22:18.885076365Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:22:18.894877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114682700.mount: Deactivated successfully.
May 13 00:22:18.895769 env[1314]: time="2025-05-13T00:22:18.895711725Z" level=info msg="CreateContainer within sandbox \"1db1ec9d76f724542016965d101bd090f4dfca4b40b53912c497c35dc828ccc4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7aca887ee19a7e8df12f7654d434ec5db7e6f7aac3ad6048a281f2e78f397250\""
May 13 00:22:18.896222 env[1314]: time="2025-05-13T00:22:18.896198211Z" level=info msg="StartContainer for \"7aca887ee19a7e8df12f7654d434ec5db7e6f7aac3ad6048a281f2e78f397250\""
May 13 00:22:18.955080 env[1314]: time="2025-05-13T00:22:18.954892958Z" level=info msg="StartContainer for \"7aca887ee19a7e8df12f7654d434ec5db7e6f7aac3ad6048a281f2e78f397250\" returns successfully"
May 13 00:22:19.232396 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 13 00:22:19.887282 kubelet[2169]: E0513 00:22:19.887232 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:21.508730 kubelet[2169]: E0513 00:22:21.508652 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:21.686620 kubelet[2169]: E0513 00:22:21.686573 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:21.994722 systemd-networkd[1091]: lxc_health: Link UP
May 13 00:22:22.004109 systemd-networkd[1091]: lxc_health: Gained carrier
May 13 00:22:22.004674 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:22:23.201501 systemd-networkd[1091]: lxc_health: Gained IPv6LL
May 13 00:22:23.439811 systemd[1]: run-containerd-runc-k8s.io-7aca887ee19a7e8df12f7654d434ec5db7e6f7aac3ad6048a281f2e78f397250-runc.tpekUe.mount: Deactivated successfully.
May 13 00:22:23.509023 kubelet[2169]: E0513 00:22:23.508917 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:23.526236 kubelet[2169]: I0513 00:22:23.526153 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-whnbw" podStartSLOduration=9.526138114 podStartE2EDuration="9.526138114s" podCreationTimestamp="2025-05-13 00:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:19.903276751 +0000 UTC m=+89.320922729" watchObservedRunningTime="2025-05-13 00:22:23.526138114 +0000 UTC m=+92.943784092"
May 13 00:22:23.893550 kubelet[2169]: E0513 00:22:23.893513 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:24.895380 kubelet[2169]: E0513 00:22:24.895335 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:25.552639 systemd[1]: run-containerd-runc-k8s.io-7aca887ee19a7e8df12f7654d434ec5db7e6f7aac3ad6048a281f2e78f397250-runc.Ian7It.mount: Deactivated successfully.
May 13 00:22:27.669666 systemd[1]: run-containerd-runc-k8s.io-7aca887ee19a7e8df12f7654d434ec5db7e6f7aac3ad6048a281f2e78f397250-runc.cxh1tr.mount: Deactivated successfully.
May 13 00:22:27.720133 sshd[3982]: pam_unix(sshd:session): session closed for user core
May 13 00:22:27.722972 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:40740.service: Deactivated successfully.
May 13 00:22:27.723720 systemd[1]: session-25.scope: Deactivated successfully.
May 13 00:22:27.724664 systemd-logind[1299]: Session 25 logged out. Waiting for processes to exit.
May 13 00:22:27.725421 systemd-logind[1299]: Removed session 25.