May 15 10:21:47.732050 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 10:21:47.732069 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 09:09:56 -00 2025
May 15 10:21:47.732077 kernel: efi: EFI v2.70 by EDK II
May 15 10:21:47.732082 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 15 10:21:47.732087 kernel: random: crng init done
May 15 10:21:47.732093 kernel: ACPI: Early table checksum verification disabled
May 15 10:21:47.732099 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 15 10:21:47.732105 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 10:21:47.732111 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732116 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732121 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732126 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732132 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732137 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732145 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732151 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732156 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:21:47.732162 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 10:21:47.732167 kernel: NUMA: Failed to initialise from firmware
May 15 10:21:47.732173 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:21:47.732179 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 15 10:21:47.732184 kernel: Zone ranges:
May 15 10:21:47.732190 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:21:47.732196 kernel: DMA32 empty
May 15 10:21:47.732202 kernel: Normal empty
May 15 10:21:47.732207 kernel: Movable zone start for each node
May 15 10:21:47.732213 kernel: Early memory node ranges
May 15 10:21:47.732218 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 15 10:21:47.732224 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 15 10:21:47.732230 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 15 10:21:47.732235 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 15 10:21:47.732241 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 15 10:21:47.732246 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 15 10:21:47.732252 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 15 10:21:47.732257 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:21:47.732264 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 10:21:47.732269 kernel: psci: probing for conduit method from ACPI.
May 15 10:21:47.732275 kernel: psci: PSCIv1.1 detected in firmware.
May 15 10:21:47.732280 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 10:21:47.732286 kernel: psci: Trusted OS migration not required
May 15 10:21:47.732294 kernel: psci: SMC Calling Convention v1.1
May 15 10:21:47.732300 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 10:21:47.732308 kernel: ACPI: SRAT not present
May 15 10:21:47.732314 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 15 10:21:47.732320 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 15 10:21:47.732326 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 10:21:47.732332 kernel: Detected PIPT I-cache on CPU0
May 15 10:21:47.732338 kernel: CPU features: detected: GIC system register CPU interface
May 15 10:21:47.732344 kernel: CPU features: detected: Hardware dirty bit management
May 15 10:21:47.732350 kernel: CPU features: detected: Spectre-v4
May 15 10:21:47.732356 kernel: CPU features: detected: Spectre-BHB
May 15 10:21:47.732363 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 10:21:47.732369 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 10:21:47.732375 kernel: CPU features: detected: ARM erratum 1418040
May 15 10:21:47.732381 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 10:21:47.732387 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 10:21:47.732393 kernel: Policy zone: DMA
May 15 10:21:47.732400 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7
May 15 10:21:47.732406 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 10:21:47.732412 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 10:21:47.732418 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 10:21:47.732424 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 10:21:47.732432 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
May 15 10:21:47.732438 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 10:21:47.732444 kernel: trace event string verifier disabled
May 15 10:21:47.732450 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 10:21:47.732456 kernel: rcu: RCU event tracing is enabled.
May 15 10:21:47.732462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 10:21:47.732468 kernel: Trampoline variant of Tasks RCU enabled.
May 15 10:21:47.732474 kernel: Tracing variant of Tasks RCU enabled.
May 15 10:21:47.732480 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 10:21:47.732486 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 10:21:47.732492 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 10:21:47.732499 kernel: GICv3: 256 SPIs implemented
May 15 10:21:47.732505 kernel: GICv3: 0 Extended SPIs implemented
May 15 10:21:47.732511 kernel: GICv3: Distributor has no Range Selector support
May 15 10:21:47.732517 kernel: Root IRQ handler: gic_handle_irq
May 15 10:21:47.732523 kernel: GICv3: 16 PPIs implemented
May 15 10:21:47.732529 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 10:21:47.732535 kernel: ACPI: SRAT not present
May 15 10:21:47.732541 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 10:21:47.732547 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 15 10:21:47.732553 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 15 10:21:47.732559 kernel: GICv3: using LPI property table @0x00000000400d0000
May 15 10:21:47.732565 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 15 10:21:47.732572 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:21:47.732578 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 10:21:47.732585 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 10:21:47.732591 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 10:21:47.732597 kernel: arm-pv: using stolen time PV
May 15 10:21:47.732603 kernel: Console: colour dummy device 80x25
May 15 10:21:47.732609 kernel: ACPI: Core revision 20210730
May 15 10:21:47.732615 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 10:21:47.732622 kernel: pid_max: default: 32768 minimum: 301
May 15 10:21:47.732628 kernel: LSM: Security Framework initializing
May 15 10:21:47.732635 kernel: SELinux: Initializing.
May 15 10:21:47.732641 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:21:47.732657 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:21:47.732663 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 10:21:47.732678 kernel: rcu: Hierarchical SRCU implementation.
May 15 10:21:47.732684 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 10:21:47.732690 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 10:21:47.732696 kernel: Remapping and enabling EFI services.
May 15 10:21:47.732702 kernel: smp: Bringing up secondary CPUs ...
May 15 10:21:47.732710 kernel: Detected PIPT I-cache on CPU1
May 15 10:21:47.732716 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 10:21:47.732723 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 15 10:21:47.732729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:21:47.732735 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 10:21:47.732741 kernel: Detected PIPT I-cache on CPU2
May 15 10:21:47.732747 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 10:21:47.732753 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 15 10:21:47.732759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:21:47.732765 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 10:21:47.732773 kernel: Detected PIPT I-cache on CPU3
May 15 10:21:47.732780 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 10:21:47.732786 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 15 10:21:47.732792 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:21:47.732802 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 10:21:47.732810 kernel: smp: Brought up 1 node, 4 CPUs
May 15 10:21:47.732816 kernel: SMP: Total of 4 processors activated.
May 15 10:21:47.732823 kernel: CPU features: detected: 32-bit EL0 Support
May 15 10:21:47.735581 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 10:21:47.735594 kernel: CPU features: detected: Common not Private translations
May 15 10:21:47.735601 kernel: CPU features: detected: CRC32 instructions
May 15 10:21:47.735608 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 10:21:47.735619 kernel: CPU features: detected: LSE atomic instructions
May 15 10:21:47.735626 kernel: CPU features: detected: Privileged Access Never
May 15 10:21:47.735632 kernel: CPU features: detected: RAS Extension Support
May 15 10:21:47.735639 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 10:21:47.735659 kernel: CPU: All CPU(s) started at EL1
May 15 10:21:47.735688 kernel: alternatives: patching kernel code
May 15 10:21:47.735695 kernel: devtmpfs: initialized
May 15 10:21:47.735702 kernel: KASLR enabled
May 15 10:21:47.735709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 10:21:47.735715 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 10:21:47.735722 kernel: pinctrl core: initialized pinctrl subsystem
May 15 10:21:47.735728 kernel: SMBIOS 3.0.0 present.
May 15 10:21:47.735735 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 15 10:21:47.735742 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 10:21:47.735757 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 10:21:47.735764 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 10:21:47.735771 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 10:21:47.735778 kernel: audit: initializing netlink subsys (disabled)
May 15 10:21:47.735785 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 15 10:21:47.735792 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 10:21:47.735798 kernel: cpuidle: using governor menu
May 15 10:21:47.735805 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 10:21:47.735811 kernel: ASID allocator initialised with 32768 entries
May 15 10:21:47.735842 kernel: ACPI: bus type PCI registered
May 15 10:21:47.735849 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 10:21:47.735864 kernel: Serial: AMBA PL011 UART driver
May 15 10:21:47.735871 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 15 10:21:47.735877 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 15 10:21:47.735884 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 15 10:21:47.735890 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 15 10:21:47.735897 kernel: cryptd: max_cpu_qlen set to 1000
May 15 10:21:47.735903 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 10:21:47.735912 kernel: ACPI: Added _OSI(Module Device)
May 15 10:21:47.735919 kernel: ACPI: Added _OSI(Processor Device)
May 15 10:21:47.735925 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 10:21:47.735931 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 10:21:47.735938 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 15 10:21:47.735944 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 15 10:21:47.735951 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 15 10:21:47.735957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 10:21:47.735964 kernel: ACPI: Interpreter enabled
May 15 10:21:47.735972 kernel: ACPI: Using GIC for interrupt routing
May 15 10:21:47.735978 kernel: ACPI: MCFG table detected, 1 entries
May 15 10:21:47.735985 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 10:21:47.735991 kernel: printk: console [ttyAMA0] enabled
May 15 10:21:47.735998 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 10:21:47.736123 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 10:21:47.736241 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 10:21:47.736309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 10:21:47.736367 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 10:21:47.736423 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 10:21:47.736432 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 10:21:47.736439 kernel: PCI host bridge to bus 0000:00
May 15 10:21:47.736503 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 10:21:47.736556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 10:21:47.736608 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 10:21:47.736867 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 10:21:47.736958 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 10:21:47.737032 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 10:21:47.737093 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 10:21:47.737152 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 10:21:47.737211 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 10:21:47.737275 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 10:21:47.737334 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 10:21:47.737393 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 10:21:47.737446 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 10:21:47.737498 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 10:21:47.737550 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 10:21:47.737559 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 10:21:47.737567 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 10:21:47.737575 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 10:21:47.737582 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 10:21:47.737588 kernel: iommu: Default domain type: Translated
May 15 10:21:47.737595 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 10:21:47.737601 kernel: vgaarb: loaded
May 15 10:21:47.737608 kernel: pps_core: LinuxPPS API ver. 1 registered
May 15 10:21:47.737614 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 15 10:21:47.737633 kernel: PTP clock support registered
May 15 10:21:47.737640 kernel: Registered efivars operations
May 15 10:21:47.737660 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 10:21:47.737698 kernel: VFS: Disk quotas dquot_6.6.0
May 15 10:21:47.737706 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 10:21:47.737713 kernel: pnp: PnP ACPI init
May 15 10:21:47.737834 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 10:21:47.737846 kernel: pnp: PnP ACPI: found 1 devices
May 15 10:21:47.737853 kernel: NET: Registered PF_INET protocol family
May 15 10:21:47.737860 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 10:21:47.737870 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 10:21:47.737878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 10:21:47.737885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 10:21:47.737892 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 15 10:21:47.737898 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 10:21:47.737905 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:21:47.737912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:21:47.737919 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 10:21:47.737925 kernel: PCI: CLS 0 bytes, default 64
May 15 10:21:47.737933 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 10:21:47.737940 kernel: kvm [1]: HYP mode not available
May 15 10:21:47.737946 kernel: Initialise system trusted keyrings
May 15 10:21:47.737953 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 10:21:47.737959 kernel: Key type asymmetric registered
May 15 10:21:47.737966 kernel: Asymmetric key parser 'x509' registered
May 15 10:21:47.737973 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 10:21:47.737979 kernel: io scheduler mq-deadline registered
May 15 10:21:47.737986 kernel: io scheduler kyber registered
May 15 10:21:47.737993 kernel: io scheduler bfq registered
May 15 10:21:47.738001 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 10:21:47.738008 kernel: ACPI: button: Power Button [PWRB]
May 15 10:21:47.738016 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 10:21:47.738080 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 10:21:47.738089 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 10:21:47.738096 kernel: thunder_xcv, ver 1.0
May 15 10:21:47.738103 kernel: thunder_bgx, ver 1.0
May 15 10:21:47.738109 kernel: nicpf, ver 1.0
May 15 10:21:47.738117 kernel: nicvf, ver 1.0
May 15 10:21:47.738181 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 10:21:47.738235 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T10:21:47 UTC (1747304507)
May 15 10:21:47.738244 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 10:21:47.738250 kernel: NET: Registered PF_INET6 protocol family
May 15 10:21:47.738257 kernel: Segment Routing with IPv6
May 15 10:21:47.738264 kernel: In-situ OAM (IOAM) with IPv6
May 15 10:21:47.738270 kernel: NET: Registered PF_PACKET protocol family
May 15 10:21:47.738279 kernel: Key type dns_resolver registered
May 15 10:21:47.738285 kernel: registered taskstats version 1
May 15 10:21:47.738292 kernel: Loading compiled-in X.509 certificates
May 15 10:21:47.738299 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 3679cbfb4d4756a2ddc177f0eaedea33fb5fdf2e'
May 15 10:21:47.738305 kernel: Key type .fscrypt registered
May 15 10:21:47.738312 kernel: Key type fscrypt-provisioning registered
May 15 10:21:47.738319 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 10:21:47.738325 kernel: ima: Allocated hash algorithm: sha1
May 15 10:21:47.738332 kernel: ima: No architecture policies found
May 15 10:21:47.738339 kernel: clk: Disabling unused clocks
May 15 10:21:47.738346 kernel: Freeing unused kernel memory: 36416K
May 15 10:21:47.738352 kernel: Run /init as init process
May 15 10:21:47.738359 kernel: with arguments:
May 15 10:21:47.738366 kernel: /init
May 15 10:21:47.738372 kernel: with environment:
May 15 10:21:47.738379 kernel: HOME=/
May 15 10:21:47.738385 kernel: TERM=linux
May 15 10:21:47.738391 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 10:21:47.738401 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:21:47.738410 systemd[1]: Detected virtualization kvm.
May 15 10:21:47.738417 systemd[1]: Detected architecture arm64.
May 15 10:21:47.738424 systemd[1]: Running in initrd.
May 15 10:21:47.738431 systemd[1]: No hostname configured, using default hostname.
May 15 10:21:47.738438 systemd[1]: Hostname set to .
May 15 10:21:47.738446 systemd[1]: Initializing machine ID from VM UUID.
May 15 10:21:47.738454 systemd[1]: Queued start job for default target initrd.target.
May 15 10:21:47.738462 systemd[1]: Started systemd-ask-password-console.path.
May 15 10:21:47.738468 systemd[1]: Reached target cryptsetup.target.
May 15 10:21:47.738475 systemd[1]: Reached target paths.target.
May 15 10:21:47.738483 systemd[1]: Reached target slices.target.
May 15 10:21:47.738490 systemd[1]: Reached target swap.target.
May 15 10:21:47.738498 systemd[1]: Reached target timers.target.
May 15 10:21:47.738505 systemd[1]: Listening on iscsid.socket.
May 15 10:21:47.738513 systemd[1]: Listening on iscsiuio.socket.
May 15 10:21:47.738520 systemd[1]: Listening on systemd-journald-audit.socket.
May 15 10:21:47.738527 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 15 10:21:47.738535 systemd[1]: Listening on systemd-journald.socket.
May 15 10:21:47.738542 systemd[1]: Listening on systemd-networkd.socket.
May 15 10:21:47.738549 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 10:21:47.738556 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 10:21:47.738563 systemd[1]: Reached target sockets.target.
May 15 10:21:47.738571 systemd[1]: Starting kmod-static-nodes.service...
May 15 10:21:47.738578 systemd[1]: Finished network-cleanup.service.
May 15 10:21:47.738585 systemd[1]: Starting systemd-fsck-usr.service...
May 15 10:21:47.738592 systemd[1]: Starting systemd-journald.service...
May 15 10:21:47.738599 systemd[1]: Starting systemd-modules-load.service...
May 15 10:21:47.738606 systemd[1]: Starting systemd-resolved.service...
May 15 10:21:47.738613 systemd[1]: Starting systemd-vconsole-setup.service...
May 15 10:21:47.738620 systemd[1]: Finished kmod-static-nodes.service.
May 15 10:21:47.738627 systemd[1]: Finished systemd-fsck-usr.service.
May 15 10:21:47.738636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 10:21:47.738643 systemd[1]: Finished systemd-vconsole-setup.service.
May 15 10:21:47.738660 kernel: audit: type=1130 audit(1747304507.732:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.738677 systemd[1]: Starting dracut-cmdline-ask.service...
May 15 10:21:47.738690 systemd-journald[289]: Journal started
May 15 10:21:47.738734 systemd-journald[289]: Runtime Journal (/run/log/journal/e735df9c4d094421b436398dfde0b45a) is 6.0M, max 48.7M, 42.6M free.
May 15 10:21:47.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.735837 systemd-modules-load[290]: Inserted module 'overlay'
May 15 10:21:47.740587 systemd[1]: Started systemd-journald.service.
May 15 10:21:47.741542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 10:21:47.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.747514 kernel: audit: type=1130 audit(1747304507.741:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.747537 kernel: audit: type=1130 audit(1747304507.742:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.759375 systemd[1]: Finished dracut-cmdline-ask.service.
May 15 10:21:47.760711 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 10:21:47.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.760911 systemd[1]: Starting dracut-cmdline.service...
May 15 10:21:47.765024 kernel: audit: type=1130 audit(1747304507.760:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.765042 kernel: Bridge firewalling registered
May 15 10:21:47.764463 systemd-resolved[291]: Positive Trust Anchors:
May 15 10:21:47.764469 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:21:47.764496 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:21:47.765004 systemd-modules-load[290]: Inserted module 'br_netfilter'
May 15 10:21:47.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.768700 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 15 10:21:47.769437 systemd[1]: Started systemd-resolved.service.
May 15 10:21:47.777813 kernel: audit: type=1130 audit(1747304507.771:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.777833 dracut-cmdline[307]: dracut-dracut-053
May 15 10:21:47.775057 systemd[1]: Reached target nss-lookup.target.
May 15 10:21:47.779691 kernel: SCSI subsystem initialized
May 15 10:21:47.779865 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7
May 15 10:21:47.788933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 10:21:47.788969 kernel: device-mapper: uevent: version 1.0.3
May 15 10:21:47.788979 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 15 10:21:47.791002 systemd-modules-load[290]: Inserted module 'dm_multipath'
May 15 10:21:47.791815 systemd[1]: Finished systemd-modules-load.service.
May 15 10:21:47.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.795434 systemd[1]: Starting systemd-sysctl.service...
May 15 10:21:47.796692 kernel: audit: type=1130 audit(1747304507.791:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.803480 systemd[1]: Finished systemd-sysctl.service.
May 15 10:21:47.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.807711 kernel: audit: type=1130 audit(1747304507.804:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.843697 kernel: Loading iSCSI transport class v2.0-870.
May 15 10:21:47.855709 kernel: iscsi: registered transport (tcp)
May 15 10:21:47.870697 kernel: iscsi: registered transport (qla4xxx)
May 15 10:21:47.870728 kernel: QLogic iSCSI HBA Driver
May 15 10:21:47.905681 systemd[1]: Finished dracut-cmdline.service.
May 15 10:21:47.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.907161 systemd[1]: Starting dracut-pre-udev.service...
May 15 10:21:47.909782 kernel: audit: type=1130 audit(1747304507.905:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:47.949700 kernel: raid6: neonx8 gen() 13429 MB/s
May 15 10:21:47.966682 kernel: raid6: neonx8 xor() 10757 MB/s
May 15 10:21:47.983688 kernel: raid6: neonx4 gen() 13475 MB/s
May 15 10:21:48.000683 kernel: raid6: neonx4 xor() 11091 MB/s
May 15 10:21:48.017694 kernel: raid6: neonx2 gen() 12679 MB/s
May 15 10:21:48.034686 kernel: raid6: neonx2 xor() 10412 MB/s
May 15 10:21:48.051683 kernel: raid6: neonx1 gen() 10542 MB/s
May 15 10:21:48.068688 kernel: raid6: neonx1 xor() 8767 MB/s
May 15 10:21:48.085685 kernel: raid6: int64x8 gen() 6268 MB/s
May 15 10:21:48.102748 kernel: raid6: int64x8 xor() 3500 MB/s
May 15 10:21:48.119684 kernel: raid6: int64x4 gen() 7179 MB/s
May 15 10:21:48.136691 kernel: raid6: int64x4 xor() 3829 MB/s
May 15 10:21:48.153692 kernel: raid6: int64x2 gen() 6108 MB/s
May 15 10:21:48.170693 kernel: raid6: int64x2 xor() 3317 MB/s
May 15 10:21:48.187695 kernel: raid6: int64x1 gen() 4924 MB/s
May 15 10:21:48.204836 kernel: raid6: int64x1 xor() 2592 MB/s
May 15 10:21:48.204859 kernel: raid6: using algorithm neonx4 gen() 13475 MB/s
May 15 10:21:48.204868 kernel: raid6: .... xor() 11091 MB/s, rmw enabled
May 15 10:21:48.205903 kernel: raid6: using neon recovery algorithm
May 15 10:21:48.217958 kernel: xor: measuring software checksum speed
May 15 10:21:48.218683 kernel: 8regs : 1697 MB/sec
May 15 10:21:48.219973 kernel: 32regs : 18411 MB/sec
May 15 10:21:48.219990 kernel: arm64_neon : 27813 MB/sec
May 15 10:21:48.219998 kernel: xor: using function: arm64_neon (27813 MB/sec)
May 15 10:21:48.280711 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 15 10:21:48.291806 systemd[1]: Finished dracut-pre-udev.service.
May 15 10:21:48.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:48.294000 audit: BPF prog-id=7 op=LOAD
May 15 10:21:48.296000 audit: BPF prog-id=8 op=LOAD
May 15 10:21:48.298144 kernel: audit: type=1130 audit(1747304508.291:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:48.297606 systemd[1]: Starting systemd-udevd.service...
May 15 10:21:48.319979 systemd-udevd[491]: Using default interface naming scheme 'v252'.
May 15 10:21:48.324818 systemd[1]: Started systemd-udevd.service.
May 15 10:21:48.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:48.326255 systemd[1]: Starting dracut-pre-trigger.service...
May 15 10:21:48.338024 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
May 15 10:21:48.370293 systemd[1]: Finished dracut-pre-trigger.service.
May 15 10:21:48.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:48.371720 systemd[1]: Starting systemd-udev-trigger.service...
May 15 10:21:48.404728 systemd[1]: Finished systemd-udev-trigger.service.
May 15 10:21:48.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:48.437709 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 10:21:48.443364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 10:21:48.443379 kernel: GPT:9289727 != 19775487
May 15 10:21:48.443388 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 10:21:48.443397 kernel: GPT:9289727 != 19775487
May 15 10:21:48.443404 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 10:21:48.443418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:21:48.457879 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 15 10:21:48.463696 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (546)
May 15 10:21:48.467042 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 15 10:21:48.467942 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 15 10:21:48.472078 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 15 10:21:48.473581 systemd[1]: Starting disk-uuid.service...
May 15 10:21:48.478052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 10:21:48.479847 disk-uuid[564]: Primary Header is updated.
May 15 10:21:48.479847 disk-uuid[564]: Secondary Entries is updated.
May 15 10:21:48.479847 disk-uuid[564]: Secondary Header is updated.
May 15 10:21:48.486694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:21:48.488685 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:21:48.491700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:21:49.491687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:21:49.491737 disk-uuid[565]: The operation has completed successfully. May 15 10:21:49.513500 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 10:21:49.514481 systemd[1]: Finished disk-uuid.service. May 15 10:21:49.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.516529 systemd[1]: Starting verity-setup.service... May 15 10:21:49.530695 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 10:21:49.551831 systemd[1]: Found device dev-mapper-usr.device. May 15 10:21:49.554691 systemd[1]: Mounting sysusr-usr.mount... May 15 10:21:49.556382 systemd[1]: Finished verity-setup.service. May 15 10:21:49.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.602689 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 10:21:49.602835 systemd[1]: Mounted sysusr-usr.mount. May 15 10:21:49.603475 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 10:21:49.604103 systemd[1]: Starting ignition-setup.service... May 15 10:21:49.606033 systemd[1]: Starting parse-ip-for-networkd.service... 
May 15 10:21:49.613167 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 10:21:49.613267 kernel: BTRFS info (device vda6): using free space tree May 15 10:21:49.613293 kernel: BTRFS info (device vda6): has skinny extents May 15 10:21:49.620291 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 10:21:49.626331 systemd[1]: Finished ignition-setup.service. May 15 10:21:49.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.627577 systemd[1]: Starting ignition-fetch-offline.service... May 15 10:21:49.687191 systemd[1]: Finished parse-ip-for-networkd.service. May 15 10:21:49.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.687000 audit: BPF prog-id=9 op=LOAD May 15 10:21:49.689291 systemd[1]: Starting systemd-networkd.service... 
May 15 10:21:49.701070 ignition[651]: Ignition 2.14.0 May 15 10:21:49.701080 ignition[651]: Stage: fetch-offline May 15 10:21:49.701119 ignition[651]: no configs at "/usr/lib/ignition/base.d" May 15 10:21:49.701129 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:21:49.701250 ignition[651]: parsed url from cmdline: "" May 15 10:21:49.701253 ignition[651]: no config URL provided May 15 10:21:49.701257 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" May 15 10:21:49.701265 ignition[651]: no config at "/usr/lib/ignition/user.ign" May 15 10:21:49.701282 ignition[651]: op(1): [started] loading QEMU firmware config module May 15 10:21:49.701286 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 10:21:49.708726 ignition[651]: op(1): [finished] loading QEMU firmware config module May 15 10:21:49.711247 systemd-networkd[742]: lo: Link UP May 15 10:21:49.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.711261 systemd-networkd[742]: lo: Gained carrier May 15 10:21:49.711595 systemd-networkd[742]: Enumeration completed May 15 10:21:49.711791 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:21:49.712024 systemd[1]: Started systemd-networkd.service. May 15 10:21:49.713081 systemd[1]: Reached target network.target. May 15 10:21:49.713137 systemd-networkd[742]: eth0: Link UP May 15 10:21:49.713141 systemd-networkd[742]: eth0: Gained carrier May 15 10:21:49.714519 systemd[1]: Starting iscsiuio.service... May 15 10:21:49.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.723553 systemd[1]: Started iscsiuio.service. 
May 15 10:21:49.725030 systemd[1]: Starting iscsid.service... May 15 10:21:49.728576 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 10:21:49.728576 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 15 10:21:49.728576 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 10:21:49.728576 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 10:21:49.728576 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 10:21:49.728576 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 10:21:49.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.731415 systemd[1]: Started iscsid.service. May 15 10:21:49.736222 systemd[1]: Starting dracut-initqueue.service... May 15 10:21:49.738035 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:21:49.746071 systemd[1]: Finished dracut-initqueue.service. May 15 10:21:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.746958 systemd[1]: Reached target remote-fs-pre.target. May 15 10:21:49.748233 systemd[1]: Reached target remote-cryptsetup.target. 
May 15 10:21:49.749636 systemd[1]: Reached target remote-fs.target. May 15 10:21:49.751784 systemd[1]: Starting dracut-pre-mount.service... May 15 10:21:49.759131 systemd[1]: Finished dracut-pre-mount.service. May 15 10:21:49.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.770494 ignition[651]: parsing config with SHA512: 90201c6ea6dd61e89db0247e3f9b8b3587870d22aef58c3621d5133f9631002ac491490bc912f5af807db7c5dd41070fda8722efad969071987056970ac22cef May 15 10:21:49.777390 unknown[651]: fetched base config from "system" May 15 10:21:49.777400 unknown[651]: fetched user config from "qemu" May 15 10:21:49.777858 ignition[651]: fetch-offline: fetch-offline passed May 15 10:21:49.777910 ignition[651]: Ignition finished successfully May 15 10:21:49.780472 systemd[1]: Finished ignition-fetch-offline.service. May 15 10:21:49.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.781411 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 10:21:49.782173 systemd[1]: Starting ignition-kargs.service... 
May 15 10:21:49.790839 ignition[763]: Ignition 2.14.0 May 15 10:21:49.790849 ignition[763]: Stage: kargs May 15 10:21:49.790938 ignition[763]: no configs at "/usr/lib/ignition/base.d" May 15 10:21:49.790947 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:21:49.791830 ignition[763]: kargs: kargs passed May 15 10:21:49.791871 ignition[763]: Ignition finished successfully May 15 10:21:49.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.794185 systemd[1]: Finished ignition-kargs.service. May 15 10:21:49.795888 systemd[1]: Starting ignition-disks.service... May 15 10:21:49.802196 ignition[769]: Ignition 2.14.0 May 15 10:21:49.802211 ignition[769]: Stage: disks May 15 10:21:49.802297 ignition[769]: no configs at "/usr/lib/ignition/base.d" May 15 10:21:49.802306 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:21:49.804566 systemd[1]: Finished ignition-disks.service. May 15 10:21:49.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.803452 ignition[769]: disks: disks passed May 15 10:21:49.805940 systemd[1]: Reached target initrd-root-device.target. May 15 10:21:49.803494 ignition[769]: Ignition finished successfully May 15 10:21:49.806925 systemd[1]: Reached target local-fs-pre.target. May 15 10:21:49.807814 systemd[1]: Reached target local-fs.target. May 15 10:21:49.808825 systemd[1]: Reached target sysinit.target. May 15 10:21:49.809797 systemd[1]: Reached target basic.target. May 15 10:21:49.811526 systemd[1]: Starting systemd-fsck-root.service... 
May 15 10:21:49.822070 systemd-fsck[777]: ROOT: clean, 623/553520 files, 56022/553472 blocks May 15 10:21:49.825035 systemd[1]: Finished systemd-fsck-root.service. May 15 10:21:49.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.826468 systemd[1]: Mounting sysroot.mount... May 15 10:21:49.833696 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 10:21:49.833775 systemd[1]: Mounted sysroot.mount. May 15 10:21:49.834324 systemd[1]: Reached target initrd-root-fs.target. May 15 10:21:49.836451 systemd[1]: Mounting sysroot-usr.mount... May 15 10:21:49.837189 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 10:21:49.837223 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 10:21:49.837245 systemd[1]: Reached target ignition-diskful.target. May 15 10:21:49.839035 systemd[1]: Mounted sysroot-usr.mount. May 15 10:21:49.841175 systemd[1]: Starting initrd-setup-root.service... May 15 10:21:49.845246 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory May 15 10:21:49.849362 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory May 15 10:21:49.853219 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory May 15 10:21:49.856827 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory May 15 10:21:49.883010 systemd[1]: Finished initrd-setup-root.service. May 15 10:21:49.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:21:49.884340 systemd[1]: Starting ignition-mount.service... May 15 10:21:49.885512 systemd[1]: Starting sysroot-boot.service... May 15 10:21:49.889767 bash[828]: umount: /sysroot/usr/share/oem: not mounted. May 15 10:21:49.897437 ignition[830]: INFO : Ignition 2.14.0 May 15 10:21:49.897437 ignition[830]: INFO : Stage: mount May 15 10:21:49.897437 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:21:49.897437 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:21:49.900142 ignition[830]: INFO : mount: mount passed May 15 10:21:49.900142 ignition[830]: INFO : Ignition finished successfully May 15 10:21:49.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.900816 systemd[1]: Finished ignition-mount.service. May 15 10:21:49.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:49.902930 systemd[1]: Finished sysroot-boot.service. May 15 10:21:50.563143 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 10:21:50.569685 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838) May 15 10:21:50.572159 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 10:21:50.572176 kernel: BTRFS info (device vda6): using free space tree May 15 10:21:50.572186 kernel: BTRFS info (device vda6): has skinny extents May 15 10:21:50.575025 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 10:21:50.576317 systemd[1]: Starting ignition-files.service... 
May 15 10:21:50.589624 ignition[858]: INFO : Ignition 2.14.0 May 15 10:21:50.589624 ignition[858]: INFO : Stage: files May 15 10:21:50.590840 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:21:50.590840 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:21:50.590840 ignition[858]: DEBUG : files: compiled without relabeling support, skipping May 15 10:21:50.594367 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 10:21:50.594367 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 10:21:50.596968 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 10:21:50.597956 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 10:21:50.597956 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 10:21:50.597680 unknown[858]: wrote ssh authorized keys file for user: core May 15 10:21:50.600750 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 10:21:50.600750 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 10:21:50.721485 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 10:21:50.859632 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 10:21:50.861027 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 10:21:50.861027 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 15 10:21:51.047810 systemd-networkd[742]: eth0: Gained IPv6LL May 15 10:21:51.246043 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 10:21:51.343670 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:21:51.344995 ignition[858]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 10:21:51.344995 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 15 10:21:51.602686 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 10:21:51.976249 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 10:21:51.976249 ignition[858]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(e): op(f): 
[finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 15 10:21:51.979028 ignition[858]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:21:52.015015 ignition[858]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:21:52.016801 ignition[858]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 15 10:21:52.016801 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 10:21:52.016801 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 10:21:52.016801 ignition[858]: INFO : files: files passed May 15 10:21:52.016801 ignition[858]: INFO : Ignition finished successfully May 15 10:21:52.031085 kernel: kauditd_printk_skb: 23 callbacks suppressed May 15 10:21:52.031107 kernel: audit: type=1130 audit(1747304512.017:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.031118 kernel: audit: type=1130 audit(1747304512.025:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:21:52.031128 kernel: audit: type=1131 audit(1747304512.025:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.017145 systemd[1]: Finished ignition-files.service. May 15 10:21:52.034706 kernel: audit: type=1130 audit(1747304512.031:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.019435 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 10:21:52.020342 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 10:21:52.038042 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 15 10:21:52.020962 systemd[1]: Starting ignition-quench.service... 
May 15 10:21:52.039925 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 10:21:52.025161 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 10:21:52.025242 systemd[1]: Finished ignition-quench.service. May 15 10:21:52.030592 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 10:21:52.031834 systemd[1]: Reached target ignition-complete.target. May 15 10:21:52.035938 systemd[1]: Starting initrd-parse-etc.service... May 15 10:21:52.047366 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 10:21:52.047447 systemd[1]: Finished initrd-parse-etc.service. May 15 10:21:52.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.048802 systemd[1]: Reached target initrd-fs.target. May 15 10:21:52.055122 kernel: audit: type=1130 audit(1747304512.048:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.055139 kernel: audit: type=1131 audit(1747304512.048:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.054483 systemd[1]: Reached target initrd.target. May 15 10:21:52.055642 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 10:21:52.056297 systemd[1]: Starting dracut-pre-pivot.service... May 15 10:21:52.065889 systemd[1]: Finished dracut-pre-pivot.service. 
May 15 10:21:52.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.067203 systemd[1]: Starting initrd-cleanup.service... May 15 10:21:52.070482 kernel: audit: type=1130 audit(1747304512.065:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.074568 systemd[1]: Stopped target nss-lookup.target. May 15 10:21:52.075359 systemd[1]: Stopped target remote-cryptsetup.target. May 15 10:21:52.076563 systemd[1]: Stopped target timers.target. May 15 10:21:52.077681 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 10:21:52.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.077782 systemd[1]: Stopped dracut-pre-pivot.service. May 15 10:21:52.082731 kernel: audit: type=1131 audit(1747304512.078:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.078845 systemd[1]: Stopped target initrd.target. May 15 10:21:52.082293 systemd[1]: Stopped target basic.target. May 15 10:21:52.083317 systemd[1]: Stopped target ignition-complete.target. May 15 10:21:52.084532 systemd[1]: Stopped target ignition-diskful.target. May 15 10:21:52.085645 systemd[1]: Stopped target initrd-root-device.target. May 15 10:21:52.086889 systemd[1]: Stopped target remote-fs.target. May 15 10:21:52.088061 systemd[1]: Stopped target remote-fs-pre.target. May 15 10:21:52.089347 systemd[1]: Stopped target sysinit.target. May 15 10:21:52.090398 systemd[1]: Stopped target local-fs.target. 
May 15 10:21:52.091480 systemd[1]: Stopped target local-fs-pre.target. May 15 10:21:52.092558 systemd[1]: Stopped target swap.target. May 15 10:21:52.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.093564 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 10:21:52.098800 kernel: audit: type=1131 audit(1747304512.094:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.093690 systemd[1]: Stopped dracut-pre-mount.service. May 15 10:21:52.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.094915 systemd[1]: Stopped target cryptsetup.target. May 15 10:21:52.103431 kernel: audit: type=1131 audit(1747304512.098:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.098176 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 10:21:52.098277 systemd[1]: Stopped dracut-initqueue.service. May 15 10:21:52.099514 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 10:21:52.099622 systemd[1]: Stopped ignition-fetch-offline.service. May 15 10:21:52.103075 systemd[1]: Stopped target paths.target. May 15 10:21:52.104074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
May 15 10:21:52.109703 systemd[1]: Stopped systemd-ask-password-console.path. May 15 10:21:52.110504 systemd[1]: Stopped target slices.target. May 15 10:21:52.111637 systemd[1]: Stopped target sockets.target. May 15 10:21:52.112706 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 10:21:52.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.112828 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 10:21:52.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.113955 systemd[1]: ignition-files.service: Deactivated successfully. May 15 10:21:52.114048 systemd[1]: Stopped ignition-files.service. May 15 10:21:52.117813 iscsid[748]: iscsid shutting down. May 15 10:21:52.116217 systemd[1]: Stopping ignition-mount.service... May 15 10:21:52.118922 systemd[1]: Stopping iscsid.service... May 15 10:21:52.120263 systemd[1]: Stopping sysroot-boot.service... May 15 10:21:52.120821 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 10:21:52.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:21:52.124623 ignition[898]: INFO : Ignition 2.14.0 May 15 10:21:52.124623 ignition[898]: INFO : Stage: umount May 15 10:21:52.124623 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:21:52.124623 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:21:52.124623 ignition[898]: INFO : umount: umount passed May 15 10:21:52.124623 ignition[898]: INFO : Ignition finished successfully May 15 10:21:52.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.120952 systemd[1]: Stopped systemd-udev-trigger.service. May 15 10:21:52.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.123914 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 10:21:52.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.124007 systemd[1]: Stopped dracut-pre-trigger.service. May 15 10:21:52.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:21:52.126453 systemd[1]: iscsid.service: Deactivated successfully. May 15 10:21:52.126555 systemd[1]: Stopped iscsid.service. May 15 10:21:52.127711 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 10:21:52.127792 systemd[1]: Stopped ignition-mount.service. May 15 10:21:52.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.129103 systemd[1]: iscsid.socket: Deactivated successfully. May 15 10:21:52.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.129174 systemd[1]: Closed iscsid.socket. May 15 10:21:52.129720 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 10:21:52.129760 systemd[1]: Stopped ignition-disks.service. May 15 10:21:52.130851 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 10:21:52.130889 systemd[1]: Stopped ignition-kargs.service. May 15 10:21:52.132081 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 10:21:52.132118 systemd[1]: Stopped ignition-setup.service. May 15 10:21:52.133407 systemd[1]: Stopping iscsiuio.service... May 15 10:21:52.136570 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 10:21:52.137003 systemd[1]: iscsiuio.service: Deactivated successfully. May 15 10:21:52.137103 systemd[1]: Stopped iscsiuio.service. May 15 10:21:52.138288 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 10:21:52.138373 systemd[1]: Finished initrd-cleanup.service. 
May 15 10:21:52.139893 systemd[1]: Stopped target network.target. May 15 10:21:52.140776 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 10:21:52.140807 systemd[1]: Closed iscsiuio.socket. May 15 10:21:52.141983 systemd[1]: Stopping systemd-networkd.service... May 15 10:21:52.142985 systemd[1]: Stopping systemd-resolved.service... May 15 10:21:52.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.150727 systemd-networkd[742]: eth0: DHCPv6 lease lost May 15 10:21:52.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.151899 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 10:21:52.152004 systemd[1]: Stopped systemd-networkd.service. May 15 10:21:52.153639 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 10:21:52.153747 systemd[1]: Stopped systemd-resolved.service. May 15 10:21:52.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.154967 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 10:21:52.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.154998 systemd[1]: Closed systemd-networkd.socket. May 15 10:21:52.161000 audit: BPF prog-id=9 op=UNLOAD May 15 10:21:52.161000 audit: BPF prog-id=6 op=UNLOAD May 15 10:21:52.157125 systemd[1]: Stopping network-cleanup.service... 
May 15 10:21:52.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.157736 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 10:21:52.157788 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 10:21:52.159271 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:21:52.159311 systemd[1]: Stopped systemd-sysctl.service. May 15 10:21:52.161495 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 10:21:52.161536 systemd[1]: Stopped systemd-modules-load.service. May 15 10:21:52.163363 systemd[1]: Stopping systemd-udevd.service... May 15 10:21:52.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.167217 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 10:21:52.170073 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 10:21:52.170186 systemd[1]: Stopped network-cleanup.service. May 15 10:21:52.174252 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 10:21:52.174378 systemd[1]: Stopped systemd-udevd.service. May 15 10:21:52.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.175781 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 10:21:52.175821 systemd[1]: Closed systemd-udevd-control.socket. May 15 10:21:52.176614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
May 15 10:21:52.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.176649 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 10:21:52.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.178180 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 10:21:52.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.178226 systemd[1]: Stopped dracut-pre-udev.service. May 15 10:21:52.179394 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 10:21:52.179432 systemd[1]: Stopped dracut-cmdline.service. May 15 10:21:52.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.180723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 10:21:52.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.180757 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 10:21:52.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.182582 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
May 15 10:21:52.183734 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 10:21:52.183792 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 15 10:21:52.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.185526 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 10:21:52.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.185564 systemd[1]: Stopped kmod-static-nodes.service. May 15 10:21:52.186316 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 10:21:52.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:52.186353 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 10:21:52.188358 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 10:21:52.188799 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 10:21:52.188882 systemd[1]: Stopped sysroot-boot.service. May 15 10:21:52.190348 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 10:21:52.190419 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 10:21:52.191735 systemd[1]: Reached target initrd-switch-root.target. May 15 10:21:52.193001 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 15 10:21:52.193048 systemd[1]: Stopped initrd-setup-root.service. May 15 10:21:52.194911 systemd[1]: Starting initrd-switch-root.service... May 15 10:21:52.200945 systemd[1]: Switching root. May 15 10:21:52.213012 systemd-journald[289]: Journal stopped May 15 10:21:54.242033 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). May 15 10:21:54.242097 kernel: SELinux: Class mctp_socket not defined in policy. May 15 10:21:54.242110 kernel: SELinux: Class anon_inode not defined in policy. May 15 10:21:54.242120 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 10:21:54.242133 kernel: SELinux: policy capability network_peer_controls=1 May 15 10:21:54.242147 kernel: SELinux: policy capability open_perms=1 May 15 10:21:54.242157 kernel: SELinux: policy capability extended_socket_class=1 May 15 10:21:54.242167 kernel: SELinux: policy capability always_check_network=0 May 15 10:21:54.242176 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 10:21:54.242185 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 10:21:54.242202 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 10:21:54.242214 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 10:21:54.242226 systemd[1]: Successfully loaded SELinux policy in 35.430ms. May 15 10:21:54.242246 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.010ms. May 15 10:21:54.242258 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:21:54.242269 systemd[1]: Detected virtualization kvm. May 15 10:21:54.242283 systemd[1]: Detected architecture arm64. May 15 10:21:54.242293 systemd[1]: Detected first boot. 
May 15 10:21:54.242304 systemd[1]: Initializing machine ID from VM UUID. May 15 10:21:54.242314 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 10:21:54.242325 systemd[1]: Populated /etc with preset unit settings. May 15 10:21:54.242335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:21:54.242347 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:21:54.242359 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:21:54.242372 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 10:21:54.242382 systemd[1]: Stopped initrd-switch-root.service. May 15 10:21:54.242395 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 10:21:54.242406 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 10:21:54.242440 systemd[1]: Created slice system-addon\x2drun.slice. May 15 10:21:54.242451 systemd[1]: Created slice system-getty.slice. May 15 10:21:54.242462 systemd[1]: Created slice system-modprobe.slice. May 15 10:21:54.242473 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 10:21:54.242486 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 10:21:54.242496 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 10:21:54.242506 systemd[1]: Created slice user.slice. May 15 10:21:54.242517 systemd[1]: Started systemd-ask-password-console.path. May 15 10:21:54.242527 systemd[1]: Started systemd-ask-password-wall.path. May 15 10:21:54.242538 systemd[1]: Set up automount boot.automount. 
May 15 10:21:54.242553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 10:21:54.242565 systemd[1]: Stopped target initrd-switch-root.target. May 15 10:21:54.242575 systemd[1]: Stopped target initrd-fs.target. May 15 10:21:54.242587 systemd[1]: Stopped target initrd-root-fs.target. May 15 10:21:54.242604 systemd[1]: Reached target integritysetup.target. May 15 10:21:54.242615 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:21:54.242627 systemd[1]: Reached target remote-fs.target. May 15 10:21:54.242637 systemd[1]: Reached target slices.target. May 15 10:21:54.242648 systemd[1]: Reached target swap.target. May 15 10:21:54.242658 systemd[1]: Reached target torcx.target. May 15 10:21:54.242678 systemd[1]: Reached target veritysetup.target. May 15 10:21:54.242692 systemd[1]: Listening on systemd-coredump.socket. May 15 10:21:54.242733 systemd[1]: Listening on systemd-initctl.socket. May 15 10:21:54.242750 systemd[1]: Listening on systemd-networkd.socket. May 15 10:21:54.242761 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:21:54.242772 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:21:54.242782 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:21:54.242792 systemd[1]: Mounting dev-hugepages.mount... May 15 10:21:54.242802 systemd[1]: Mounting dev-mqueue.mount... May 15 10:21:54.242812 systemd[1]: Mounting media.mount... May 15 10:21:54.242823 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:21:54.242850 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:21:54.242863 systemd[1]: Mounting tmp.mount... May 15 10:21:54.242873 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:21:54.242886 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:21:54.242897 systemd[1]: Starting kmod-static-nodes.service... May 15 10:21:54.242907 systemd[1]: Starting modprobe@configfs.service... 
May 15 10:21:54.242917 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:21:54.242927 systemd[1]: Starting modprobe@drm.service... May 15 10:21:54.242938 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:21:54.242950 systemd[1]: Starting modprobe@fuse.service... May 15 10:21:54.242960 systemd[1]: Starting modprobe@loop.service... May 15 10:21:54.242972 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:21:54.242983 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 10:21:54.242993 systemd[1]: Stopped systemd-fsck-root.service. May 15 10:21:54.243003 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 10:21:54.243014 systemd[1]: Stopped systemd-fsck-usr.service. May 15 10:21:54.243024 systemd[1]: Stopped systemd-journald.service. May 15 10:21:54.243035 kernel: fuse: init (API version 7.34) May 15 10:21:54.243045 systemd[1]: Starting systemd-journald.service... May 15 10:21:54.243055 kernel: loop: module loaded May 15 10:21:54.243065 systemd[1]: Starting systemd-modules-load.service... May 15 10:21:54.243075 systemd[1]: Starting systemd-network-generator.service... May 15 10:21:54.243086 systemd[1]: Starting systemd-remount-fs.service... May 15 10:21:54.243096 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:21:54.243108 systemd[1]: verity-setup.service: Deactivated successfully. May 15 10:21:54.243118 systemd[1]: Stopped verity-setup.service. May 15 10:21:54.243133 systemd[1]: Mounted dev-hugepages.mount. May 15 10:21:54.243144 systemd[1]: Mounted dev-mqueue.mount. May 15 10:21:54.243155 systemd[1]: Mounted media.mount. May 15 10:21:54.243165 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:21:54.243175 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:21:54.243186 systemd[1]: Mounted tmp.mount. May 15 10:21:54.243196 systemd[1]: Finished kmod-static-nodes.service. 
May 15 10:21:54.243206 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:21:54.243216 systemd[1]: Finished modprobe@configfs.service. May 15 10:21:54.243229 systemd-journald[1001]: Journal started May 15 10:21:54.243273 systemd-journald[1001]: Runtime Journal (/run/log/journal/e735df9c4d094421b436398dfde0b45a) is 6.0M, max 48.7M, 42.6M free. May 15 10:21:52.275000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 10:21:52.376000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:21:52.376000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:21:52.376000 audit: BPF prog-id=10 op=LOAD May 15 10:21:52.376000 audit: BPF prog-id=10 op=UNLOAD May 15 10:21:52.376000 audit: BPF prog-id=11 op=LOAD May 15 10:21:52.376000 audit: BPF prog-id=11 op=UNLOAD May 15 10:21:52.418000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 10:21:52.418000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf040 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:21:52.418000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:21:52.420000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 10:21:52.420000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:21:52.420000 audit: CWD cwd="/" May 15 10:21:52.420000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:21:52.420000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:21:52.420000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:21:54.114000 audit: BPF prog-id=12 op=LOAD May 15 10:21:54.114000 audit: BPF prog-id=3 op=UNLOAD May 15 10:21:54.114000 audit: BPF prog-id=13 op=LOAD May 15 10:21:54.114000 audit: BPF prog-id=14 op=LOAD May 15 10:21:54.114000 audit: BPF prog-id=4 op=UNLOAD May 15 10:21:54.114000 audit: BPF prog-id=5 op=UNLOAD May 15 10:21:54.115000 audit: 
BPF prog-id=15 op=LOAD May 15 10:21:54.115000 audit: BPF prog-id=12 op=UNLOAD May 15 10:21:54.115000 audit: BPF prog-id=16 op=LOAD May 15 10:21:54.115000 audit: BPF prog-id=17 op=LOAD May 15 10:21:54.115000 audit: BPF prog-id=13 op=UNLOAD May 15 10:21:54.115000 audit: BPF prog-id=14 op=UNLOAD May 15 10:21:54.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.124000 audit: BPF prog-id=15 op=UNLOAD May 15 10:21:54.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 15 10:21:54.211000 audit: BPF prog-id=18 op=LOAD May 15 10:21:54.212000 audit: BPF prog-id=19 op=LOAD May 15 10:21:54.212000 audit: BPF prog-id=20 op=LOAD May 15 10:21:54.212000 audit: BPF prog-id=16 op=UNLOAD May 15 10:21:54.212000 audit: BPF prog-id=17 op=UNLOAD May 15 10:21:54.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.240000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:21:54.240000 audit[1001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffec8988b0 a2=4000 a3=1 items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:21:54.240000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:21:54.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:21:54.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:21:52.417881 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:21:54.113402 systemd[1]: Queued start job for default target multi-user.target. May 15 10:21:52.418124 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:21:54.113413 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:21:52.418142 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:21:54.116914 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 15 10:21:52.418171 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 15 10:21:52.418181 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 15 10:21:52.418207 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 15 10:21:52.418218 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 15 10:21:52.418415 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 15 10:21:52.418446 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 15 10:21:52.418457 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 15 10:21:52.419054 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 15 10:21:54.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:52.419090 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 15 10:21:52.419108 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100
May 15 10:21:54.245746 systemd[1]: Started systemd-journald.service.
May 15 10:21:52.419121 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 15 10:21:52.419139 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100
May 15 10:21:52.419152 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 15 10:21:53.846774 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 15 10:21:54.246056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:21:53.847036 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 15 10:21:53.847141 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 15 10:21:54.246212 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:21:53.847299 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 15 10:21:53.847350 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 15 10:21:53.847402 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:21:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 15 10:21:54.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.247358 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 10:21:54.247518 systemd[1]: Finished modprobe@drm.service.
May 15 10:21:54.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.248461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:21:54.248606 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:21:54.249482 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 10:21:54.249629 systemd[1]: Finished modprobe@fuse.service.
May 15 10:21:54.250530 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:21:54.250685 systemd[1]: Finished modprobe@loop.service.
May 15 10:21:54.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.251551 systemd[1]: Finished flatcar-tmpfiles.service.
May 15 10:21:54.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.252467 systemd[1]: Finished systemd-modules-load.service.
May 15 10:21:54.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.253488 systemd[1]: Finished systemd-network-generator.service.
May 15 10:21:54.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.254467 systemd[1]: Finished systemd-remount-fs.service.
May 15 10:21:54.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.255620 systemd[1]: Reached target network-pre.target.
May 15 10:21:54.257394 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 15 10:21:54.259141 systemd[1]: Mounting sys-kernel-config.mount...
May 15 10:21:54.259729 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 10:21:54.261339 systemd[1]: Starting systemd-hwdb-update.service...
May 15 10:21:54.262997 systemd[1]: Starting systemd-journal-flush.service...
May 15 10:21:54.263855 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:21:54.264773 systemd[1]: Starting systemd-random-seed.service...
May 15 10:21:54.265487 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:21:54.266532 systemd[1]: Starting systemd-sysctl.service...
May 15 10:21:54.268385 systemd[1]: Starting systemd-sysusers.service...
May 15 10:21:54.270377 systemd-journald[1001]: Time spent on flushing to /var/log/journal/e735df9c4d094421b436398dfde0b45a is 16.121ms for 1004 entries.
May 15 10:21:54.270377 systemd-journald[1001]: System Journal (/var/log/journal/e735df9c4d094421b436398dfde0b45a) is 8.0M, max 195.6M, 187.6M free.
May 15 10:21:54.297481 systemd-journald[1001]: Received client request to flush runtime journal.
May 15 10:21:54.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.272262 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 15 10:21:54.298639 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 15 10:21:54.273250 systemd[1]: Mounted sys-kernel-config.mount.
May 15 10:21:54.279004 systemd[1]: Finished systemd-udev-trigger.service.
May 15 10:21:54.281266 systemd[1]: Starting systemd-udev-settle.service...
May 15 10:21:54.291480 systemd[1]: Finished systemd-random-seed.service.
May 15 10:21:54.292613 systemd[1]: Finished systemd-sysctl.service.
May 15 10:21:54.293662 systemd[1]: Reached target first-boot-complete.target.
May 15 10:21:54.297545 systemd[1]: Finished systemd-sysusers.service.
May 15 10:21:54.298569 systemd[1]: Finished systemd-journal-flush.service.
May 15 10:21:54.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.300541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 10:21:54.321930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 10:21:54.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.633341 systemd[1]: Finished systemd-hwdb-update.service.
May 15 10:21:54.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.634000 audit: BPF prog-id=21 op=LOAD
May 15 10:21:54.634000 audit: BPF prog-id=22 op=LOAD
May 15 10:21:54.634000 audit: BPF prog-id=7 op=UNLOAD
May 15 10:21:54.634000 audit: BPF prog-id=8 op=UNLOAD
May 15 10:21:54.635556 systemd[1]: Starting systemd-udevd.service...
May 15 10:21:54.653355 systemd-udevd[1037]: Using default interface naming scheme 'v252'.
May 15 10:21:54.665997 systemd[1]: Started systemd-udevd.service.
May 15 10:21:54.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.667000 audit: BPF prog-id=23 op=LOAD
May 15 10:21:54.669597 systemd[1]: Starting systemd-networkd.service...
May 15 10:21:54.674000 audit: BPF prog-id=24 op=LOAD
May 15 10:21:54.674000 audit: BPF prog-id=25 op=LOAD
May 15 10:21:54.674000 audit: BPF prog-id=26 op=LOAD
May 15 10:21:54.675738 systemd[1]: Starting systemd-userdbd.service...
May 15 10:21:54.693355 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
May 15 10:21:54.712358 systemd[1]: Started systemd-userdbd.service.
May 15 10:21:54.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.725553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 10:21:54.766032 systemd[1]: Finished systemd-udev-settle.service.
May 15 10:21:54.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.768004 systemd[1]: Starting lvm2-activation-early.service...
May 15 10:21:54.771164 systemd-networkd[1044]: lo: Link UP
May 15 10:21:54.771402 systemd-networkd[1044]: lo: Gained carrier
May 15 10:21:54.771971 systemd-networkd[1044]: Enumeration completed
May 15 10:21:54.772147 systemd[1]: Started systemd-networkd.service.
May 15 10:21:54.772240 systemd-networkd[1044]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 10:21:54.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.773573 systemd-networkd[1044]: eth0: Link UP
May 15 10:21:54.773813 systemd-networkd[1044]: eth0: Gained carrier
May 15 10:21:54.782340 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 10:21:54.799822 systemd-networkd[1044]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 10:21:54.804545 systemd[1]: Finished lvm2-activation-early.service.
May 15 10:21:54.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.805451 systemd[1]: Reached target cryptsetup.target.
May 15 10:21:54.807307 systemd[1]: Starting lvm2-activation.service...
May 15 10:21:54.810916 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 10:21:54.840620 systemd[1]: Finished lvm2-activation.service.
May 15 10:21:54.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.841486 systemd[1]: Reached target local-fs-pre.target.
May 15 10:21:54.842215 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 10:21:54.842244 systemd[1]: Reached target local-fs.target.
May 15 10:21:54.842895 systemd[1]: Reached target machines.target.
May 15 10:21:54.844807 systemd[1]: Starting ldconfig.service...
May 15 10:21:54.845806 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:21:54.845876 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:21:54.847019 systemd[1]: Starting systemd-boot-update.service...
May 15 10:21:54.848878 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 15 10:21:54.850839 systemd[1]: Starting systemd-machine-id-commit.service...
May 15 10:21:54.852745 systemd[1]: Starting systemd-sysext.service...
May 15 10:21:54.855779 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl)
May 15 10:21:54.856783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 15 10:21:54.867310 systemd[1]: Unmounting usr-share-oem.mount...
May 15 10:21:54.870557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 15 10:21:54.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.875995 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 15 10:21:54.876193 systemd[1]: Unmounted usr-share-oem.mount.
May 15 10:21:54.889699 kernel: loop0: detected capacity change from 0 to 189592
May 15 10:21:54.926187 systemd[1]: Finished systemd-machine-id-commit.service.
May 15 10:21:54.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.933688 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 10:21:54.950392 systemd-fsck[1084]: fsck.fat 4.2 (2021-01-31)
May 15 10:21:54.950392 systemd-fsck[1084]: /dev/vda1: 236 files, 117182/258078 clusters
May 15 10:21:54.952477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 15 10:21:54.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.955745 kernel: loop1: detected capacity change from 0 to 189592
May 15 10:21:54.960085 (sd-sysext)[1087]: Using extensions 'kubernetes'.
May 15 10:21:54.960932 (sd-sysext)[1087]: Merged extensions into '/usr'.
May 15 10:21:54.977527 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:21:54.978868 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:21:54.980780 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:21:54.982788 systemd[1]: Starting modprobe@loop.service...
May 15 10:21:54.983553 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:21:54.983727 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:21:54.984595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:21:54.984736 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:21:54.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.986025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:21:54.986129 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:21:54.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.987397 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:21:54.987505 systemd[1]: Finished modprobe@loop.service.
May 15 10:21:54.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:54.988760 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:21:54.988861 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:21:55.041688 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 10:21:55.044886 systemd[1]: Finished ldconfig.service.
May 15 10:21:55.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.230352 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 10:21:55.232243 systemd[1]: Mounting boot.mount...
May 15 10:21:55.234030 systemd[1]: Mounting usr-share-oem.mount...
May 15 10:21:55.238977 systemd[1]: Mounted usr-share-oem.mount.
May 15 10:21:55.241563 systemd[1]: Finished systemd-sysext.service.
May 15 10:21:55.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.242454 systemd[1]: Mounted boot.mount.
May 15 10:21:55.245132 systemd[1]: Starting ensure-sysext.service...
May 15 10:21:55.246770 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 15 10:21:55.251314 systemd[1]: Finished systemd-boot-update.service.
May 15 10:21:55.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.252220 systemd[1]: Reloading.
May 15 10:21:55.256226 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 15 10:21:55.257437 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 10:21:55.258792 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 10:21:55.294095 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-05-15T10:21:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:21:55.294126 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-05-15T10:21:55Z" level=info msg="torcx already run"
May 15 10:21:55.355649 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:21:55.355682 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:21:55.373854 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:21:55.422000 audit: BPF prog-id=27 op=LOAD
May 15 10:21:55.422000 audit: BPF prog-id=24 op=UNLOAD
May 15 10:21:55.423000 audit: BPF prog-id=28 op=LOAD
May 15 10:21:55.423000 audit: BPF prog-id=29 op=LOAD
May 15 10:21:55.423000 audit: BPF prog-id=25 op=UNLOAD
May 15 10:21:55.423000 audit: BPF prog-id=26 op=UNLOAD
May 15 10:21:55.423000 audit: BPF prog-id=30 op=LOAD
May 15 10:21:55.423000 audit: BPF prog-id=31 op=LOAD
May 15 10:21:55.423000 audit: BPF prog-id=21 op=UNLOAD
May 15 10:21:55.423000 audit: BPF prog-id=22 op=UNLOAD
May 15 10:21:55.424000 audit: BPF prog-id=32 op=LOAD
May 15 10:21:55.424000 audit: BPF prog-id=18 op=UNLOAD
May 15 10:21:55.424000 audit: BPF prog-id=33 op=LOAD
May 15 10:21:55.424000 audit: BPF prog-id=34 op=LOAD
May 15 10:21:55.424000 audit: BPF prog-id=19 op=UNLOAD
May 15 10:21:55.424000 audit: BPF prog-id=20 op=UNLOAD
May 15 10:21:55.426000 audit: BPF prog-id=35 op=LOAD
May 15 10:21:55.426000 audit: BPF prog-id=23 op=UNLOAD
May 15 10:21:55.429531 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 15 10:21:55.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.435761 systemd[1]: Starting audit-rules.service...
May 15 10:21:55.437649 systemd[1]: Starting clean-ca-certificates.service...
May 15 10:21:55.440169 systemd[1]: Starting systemd-journal-catalog-update.service...
May 15 10:21:55.442000 audit: BPF prog-id=36 op=LOAD
May 15 10:21:55.448804 systemd[1]: Starting systemd-resolved.service...
May 15 10:21:55.453000 audit: BPF prog-id=37 op=LOAD
May 15 10:21:55.455062 systemd[1]: Starting systemd-timesyncd.service...
May 15 10:21:55.457388 systemd[1]: Starting systemd-update-utmp.service...
May 15 10:21:55.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.461000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.460943 systemd[1]: Finished clean-ca-certificates.service.
May 15 10:21:55.462053 systemd[1]: Finished systemd-journal-catalog-update.service.
May 15 10:21:55.466478 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:21:55.468016 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:21:55.470054 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:21:55.472207 systemd[1]: Starting modprobe@loop.service...
May 15 10:21:55.472981 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:21:55.473201 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:21:55.474833 systemd[1]: Starting systemd-update-done.service...
May 15 10:21:55.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.475642 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:21:55.477485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:21:55.477629 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:21:55.478958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:21:55.479070 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:21:55.480336 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:21:55.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.480454 systemd[1]: Finished modprobe@loop.service.
May 15 10:21:55.483478 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:21:55.485123 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:21:55.487175 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:21:55.489204 systemd[1]: Starting modprobe@loop.service...
May 15 10:21:55.490180 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:21:55.490344 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:21:55.490476 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:21:55.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.491593 systemd[1]: Finished systemd-update-utmp.service.
May 15 10:21:55.492983 systemd[1]: Finished systemd-update-done.service.
May 15 10:21:55.494152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:21:55.494271 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:21:55.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.495447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:21:55.495568 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:21:55.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.496837 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:21:55.496966 systemd[1]: Finished modprobe@loop.service.
May 15 10:21:55.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.499242 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:21:55.499352 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:21:55.502116 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:21:55.503644 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:21:55.505883 systemd[1]: Starting modprobe@drm.service...
May 15 10:21:55.508098 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:21:55.510259 systemd[1]: Starting modprobe@loop.service...
May 15 10:21:55.511120 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:21:55.511311 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:21:55.512874 systemd[1]: Starting systemd-networkd-wait-online.service...
May 15 10:21:55.513915 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:21:55.518926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:21:55.519080 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:21:55.519000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 15 10:21:55.519000 audit[1182]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcf649ef0 a2=420 a3=0 items=0 ppid=1154 pid=1182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:21:55.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.519000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 15 10:21:55.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:21:55.520217 augenrules[1182]: No rules
May 15 10:21:55.520289 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 10:21:55.520412 systemd[1]: Finished modprobe@drm.service.
May 15 10:21:55.521836 systemd[1]: Finished audit-rules.service.
May 15 10:21:55.522939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:21:55.523072 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:21:55.524230 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:21:55.524358 systemd[1]: Finished modprobe@loop.service.
May 15 10:21:55.525971 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:21:55.526051 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:21:55.527266 systemd[1]: Finished ensure-sysext.service.
May 15 10:21:55.535304 systemd-resolved[1158]: Positive Trust Anchors:
May 15 10:21:55.535318 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:21:55.535349 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:21:55.551284 systemd[1]: Started systemd-timesyncd.service.
May 15 10:21:55.552199 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 10:21:55.552264 systemd-timesyncd[1162]: Initial clock synchronization to Thu 2025-05-15 10:21:55.911099 UTC.
May 15 10:21:55.552553 systemd[1]: Reached target time-set.target.
May 15 10:21:55.559858 systemd-resolved[1158]: Defaulting to hostname 'linux'.
May 15 10:21:55.561253 systemd[1]: Started systemd-resolved.service.
May 15 10:21:55.562111 systemd[1]: Reached target network.target. May 15 10:21:55.562857 systemd[1]: Reached target nss-lookup.target. May 15 10:21:55.563616 systemd[1]: Reached target sysinit.target. May 15 10:21:55.564444 systemd[1]: Started motdgen.path. May 15 10:21:55.565139 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 15 10:21:55.566336 systemd[1]: Started logrotate.timer. May 15 10:21:55.567152 systemd[1]: Started mdadm.timer. May 15 10:21:55.567808 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 10:21:55.568592 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 10:21:55.568623 systemd[1]: Reached target paths.target. May 15 10:21:55.569341 systemd[1]: Reached target timers.target. May 15 10:21:55.570387 systemd[1]: Listening on dbus.socket. May 15 10:21:55.572171 systemd[1]: Starting docker.socket... May 15 10:21:55.575535 systemd[1]: Listening on sshd.socket. May 15 10:21:55.576398 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:21:55.576849 systemd[1]: Listening on docker.socket. May 15 10:21:55.577648 systemd[1]: Reached target sockets.target. May 15 10:21:55.578385 systemd[1]: Reached target basic.target. May 15 10:21:55.579164 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:21:55.579202 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:21:55.580188 systemd[1]: Starting containerd.service... May 15 10:21:55.581939 systemd[1]: Starting dbus.service... May 15 10:21:55.583678 systemd[1]: Starting enable-oem-cloudinit.service... May 15 10:21:55.585653 systemd[1]: Starting extend-filesystems.service... 
May 15 10:21:55.586561 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 10:21:55.588064 systemd[1]: Starting motdgen.service... May 15 10:21:55.591984 jq[1197]: false May 15 10:21:55.592677 systemd[1]: Starting prepare-helm.service... May 15 10:21:55.594490 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 10:21:55.596488 systemd[1]: Starting sshd-keygen.service... May 15 10:21:55.599391 systemd[1]: Starting systemd-logind.service... May 15 10:21:55.600195 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:21:55.600284 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 10:21:55.600968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 10:21:55.601887 systemd[1]: Starting update-engine.service... May 15 10:21:55.604313 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 10:21:55.607756 jq[1212]: true May 15 10:21:55.607209 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 10:21:55.607400 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 10:21:55.608602 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 10:21:55.609114 systemd[1]: Finished ssh-key-proc-cmdline.service. 
May 15 10:21:55.616829 jq[1220]: true May 15 10:21:55.617420 extend-filesystems[1198]: Found loop1 May 15 10:21:55.619357 tar[1218]: linux-arm64/helm May 15 10:21:55.619633 extend-filesystems[1198]: Found vda May 15 10:21:55.620399 extend-filesystems[1198]: Found vda1 May 15 10:21:55.621048 extend-filesystems[1198]: Found vda2 May 15 10:21:55.621797 extend-filesystems[1198]: Found vda3 May 15 10:21:55.622192 systemd[1]: motdgen.service: Deactivated successfully. May 15 10:21:55.622342 systemd[1]: Finished motdgen.service. May 15 10:21:55.622662 extend-filesystems[1198]: Found usr May 15 10:21:55.624119 extend-filesystems[1198]: Found vda4 May 15 10:21:55.624119 extend-filesystems[1198]: Found vda6 May 15 10:21:55.624119 extend-filesystems[1198]: Found vda7 May 15 10:21:55.624119 extend-filesystems[1198]: Found vda9 May 15 10:21:55.624119 extend-filesystems[1198]: Checking size of /dev/vda9 May 15 10:21:55.629036 dbus-daemon[1196]: [system] SELinux support is enabled May 15 10:21:55.629196 systemd[1]: Started dbus.service. May 15 10:21:55.631930 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 10:21:55.631961 systemd[1]: Reached target system-config.target. May 15 10:21:55.632991 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 10:21:55.633012 systemd[1]: Reached target user-config.target. May 15 10:21:55.636623 extend-filesystems[1198]: Resized partition /dev/vda9 May 15 10:21:55.644224 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) May 15 10:21:55.656779 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 10:21:55.696903 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (Power Button) May 15 10:21:55.697446 systemd-logind[1207]: New seat seat0. 
May 15 10:21:55.698977 systemd[1]: Started systemd-logind.service. May 15 10:21:55.707881 update_engine[1211]: I0515 10:21:55.707593 1211 main.cc:92] Flatcar Update Engine starting May 15 10:21:55.709985 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 10:21:55.711239 systemd[1]: Started update-engine.service. May 15 10:21:55.715098 systemd[1]: Started locksmithd.service. May 15 10:21:55.727406 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 10:21:55.727406 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 10:21:55.727406 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 10:21:55.732665 update_engine[1211]: I0515 10:21:55.716573 1211 update_check_scheduler.cc:74] Next update check in 2m33s May 15 10:21:55.729446 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 10:21:55.732783 extend-filesystems[1198]: Resized filesystem in /dev/vda9 May 15 10:21:55.729664 systemd[1]: Finished extend-filesystems.service. May 15 10:21:55.733863 bash[1242]: Updated "/home/core/.ssh/authorized_keys" May 15 10:21:55.734155 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 10:21:55.738465 env[1221]: time="2025-05-15T10:21:55.738381120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 10:21:55.756483 env[1221]: time="2025-05-15T10:21:55.756434360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 10:21:55.756625 env[1221]: time="2025-05-15T10:21:55.756611040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 10:21:55.761882 env[1221]: time="2025-05-15T10:21:55.761835440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 10:21:55.761882 env[1221]: time="2025-05-15T10:21:55.761874320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 10:21:55.762137 env[1221]: time="2025-05-15T10:21:55.762112440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:21:55.762192 env[1221]: time="2025-05-15T10:21:55.762136160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 10:21:55.762192 env[1221]: time="2025-05-15T10:21:55.762151240Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 10:21:55.762192 env[1221]: time="2025-05-15T10:21:55.762161320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 10:21:55.762263 env[1221]: time="2025-05-15T10:21:55.762233280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 10:21:55.762516 env[1221]: time="2025-05-15T10:21:55.762493600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 10:21:55.762652 env[1221]: time="2025-05-15T10:21:55.762628160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:21:55.762652 env[1221]: time="2025-05-15T10:21:55.762649200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 10:21:55.762736 env[1221]: time="2025-05-15T10:21:55.762720200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 10:21:55.762736 env[1221]: time="2025-05-15T10:21:55.762732920Z" level=info msg="metadata content store policy set" policy=shared May 15 10:21:55.769195 env[1221]: time="2025-05-15T10:21:55.769159880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 10:21:55.769195 env[1221]: time="2025-05-15T10:21:55.769196480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 10:21:55.769299 env[1221]: time="2025-05-15T10:21:55.769209960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 10:21:55.769299 env[1221]: time="2025-05-15T10:21:55.769242880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769299 env[1221]: time="2025-05-15T10:21:55.769257960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769299 env[1221]: time="2025-05-15T10:21:55.769272120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769299 env[1221]: time="2025-05-15T10:21:55.769284600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 15 10:21:55.769664 env[1221]: time="2025-05-15T10:21:55.769640440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769736 env[1221]: time="2025-05-15T10:21:55.769683440Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769736 env[1221]: time="2025-05-15T10:21:55.769699720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769736 env[1221]: time="2025-05-15T10:21:55.769712280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 10:21:55.769736 env[1221]: time="2025-05-15T10:21:55.769724960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 10:21:55.769882 env[1221]: time="2025-05-15T10:21:55.769845880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 10:21:55.769950 env[1221]: time="2025-05-15T10:21:55.769932640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 10:21:55.770172 env[1221]: time="2025-05-15T10:21:55.770151440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 10:21:55.770210 env[1221]: time="2025-05-15T10:21:55.770181120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770210 env[1221]: time="2025-05-15T10:21:55.770196480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 10:21:55.770313 env[1221]: time="2025-05-15T10:21:55.770295960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 15 10:21:55.770313 env[1221]: time="2025-05-15T10:21:55.770312200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770386 env[1221]: time="2025-05-15T10:21:55.770325080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770386 env[1221]: time="2025-05-15T10:21:55.770340440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770386 env[1221]: time="2025-05-15T10:21:55.770352240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770386 env[1221]: time="2025-05-15T10:21:55.770363400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770386 env[1221]: time="2025-05-15T10:21:55.770374120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770386 env[1221]: time="2025-05-15T10:21:55.770386480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770510 env[1221]: time="2025-05-15T10:21:55.770399280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770525240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770547200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770559640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770570880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770594880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770608080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 10:21:55.770630 env[1221]: time="2025-05-15T10:21:55.770625320Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 10:21:55.770818 env[1221]: time="2025-05-15T10:21:55.770660400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 10:21:55.770914 env[1221]: time="2025-05-15T10:21:55.770859680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.770918560Z" level=info msg="Connect containerd service" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.770959920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.771705720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.772049520Z" level=info msg="Start subscribing containerd event" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.772221480Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773105000Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773105920Z" level=info msg="Start recovering state" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773177680Z" level=info msg="containerd successfully booted in 0.035574s" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773219320Z" level=info msg="Start event monitor" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773251120Z" level=info msg="Start snapshots syncer" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773263000Z" level=info msg="Start cni network conf syncer for default" May 15 10:21:55.774017 env[1221]: time="2025-05-15T10:21:55.773270840Z" level=info msg="Start streaming server" May 15 10:21:55.773275 systemd[1]: Started containerd.service. May 15 10:21:55.798912 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 10:21:55.847917 systemd-networkd[1044]: eth0: Gained IPv6LL May 15 10:21:55.849800 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 10:21:55.850805 systemd[1]: Reached target network-online.target. May 15 10:21:55.852905 systemd[1]: Starting kubelet.service... May 15 10:21:56.046756 tar[1218]: linux-arm64/LICENSE May 15 10:21:56.046921 tar[1218]: linux-arm64/README.md May 15 10:21:56.051262 systemd[1]: Finished prepare-helm.service. May 15 10:21:56.402175 systemd[1]: Started kubelet.service. 
May 15 10:21:56.865993 kubelet[1264]: E0515 10:21:56.865880 1264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:21:56.867949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:21:56.868077 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:21:58.922238 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 10:21:58.940784 systemd[1]: Finished sshd-keygen.service. May 15 10:21:58.943108 systemd[1]: Starting issuegen.service... May 15 10:21:58.947873 systemd[1]: issuegen.service: Deactivated successfully. May 15 10:21:58.948046 systemd[1]: Finished issuegen.service. May 15 10:21:58.950203 systemd[1]: Starting systemd-user-sessions.service... May 15 10:21:58.956414 systemd[1]: Finished systemd-user-sessions.service. May 15 10:21:58.958685 systemd[1]: Started getty@tty1.service. May 15 10:21:58.960656 systemd[1]: Started serial-getty@ttyAMA0.service. May 15 10:21:58.961647 systemd[1]: Reached target getty.target. May 15 10:21:58.962373 systemd[1]: Reached target multi-user.target. May 15 10:21:58.964254 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 10:21:58.970899 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 10:21:58.971048 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 10:21:58.971919 systemd[1]: Startup finished in 587ms (kernel) + 4.666s (initrd) + 6.733s (userspace) = 11.988s. May 15 10:21:59.931915 systemd[1]: Created slice system-sshd.slice. May 15 10:21:59.933116 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:41516.service. 
May 15 10:21:59.982832 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 41516 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:21:59.985125 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:21:59.994896 systemd-logind[1207]: New session 1 of user core. May 15 10:21:59.995908 systemd[1]: Created slice user-500.slice. May 15 10:21:59.997138 systemd[1]: Starting user-runtime-dir@500.service... May 15 10:22:00.006079 systemd[1]: Finished user-runtime-dir@500.service. May 15 10:22:00.007636 systemd[1]: Starting user@500.service... May 15 10:22:00.011174 (systemd)[1289]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 10:22:00.076556 systemd[1289]: Queued start job for default target default.target. May 15 10:22:00.077147 systemd[1289]: Reached target paths.target. May 15 10:22:00.077183 systemd[1289]: Reached target sockets.target. May 15 10:22:00.077195 systemd[1289]: Reached target timers.target. May 15 10:22:00.077206 systemd[1289]: Reached target basic.target. May 15 10:22:00.077250 systemd[1289]: Reached target default.target. May 15 10:22:00.077278 systemd[1289]: Startup finished in 57ms. May 15 10:22:00.077331 systemd[1]: Started user@500.service. May 15 10:22:00.078496 systemd[1]: Started session-1.scope. May 15 10:22:00.132279 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:41518.service. May 15 10:22:00.176855 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 41518 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:22:00.178470 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:22:00.182093 systemd-logind[1207]: New session 2 of user core. May 15 10:22:00.183347 systemd[1]: Started session-2.scope. 
May 15 10:22:00.240698 sshd[1298]: pam_unix(sshd:session): session closed for user core May 15 10:22:00.243198 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:41518.service: Deactivated successfully. May 15 10:22:00.243776 systemd[1]: session-2.scope: Deactivated successfully. May 15 10:22:00.244318 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit. May 15 10:22:00.245487 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:41530.service. May 15 10:22:00.246196 systemd-logind[1207]: Removed session 2. May 15 10:22:00.285092 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 41530 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:22:00.286542 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:22:00.291214 systemd-logind[1207]: New session 3 of user core. May 15 10:22:00.292081 systemd[1]: Started session-3.scope. May 15 10:22:00.344595 sshd[1304]: pam_unix(sshd:session): session closed for user core May 15 10:22:00.347520 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:41530.service: Deactivated successfully. May 15 10:22:00.348152 systemd[1]: session-3.scope: Deactivated successfully. May 15 10:22:00.348667 systemd-logind[1207]: Session 3 logged out. Waiting for processes to exit. May 15 10:22:00.349835 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:41544.service. May 15 10:22:00.350602 systemd-logind[1207]: Removed session 3. May 15 10:22:00.397155 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 41544 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:22:00.398845 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:22:00.402652 systemd-logind[1207]: New session 4 of user core. May 15 10:22:00.404276 systemd[1]: Started session-4.scope. 
May 15 10:22:00.463748 sshd[1311]: pam_unix(sshd:session): session closed for user core May 15 10:22:00.468196 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:41544.service: Deactivated successfully. May 15 10:22:00.469641 systemd[1]: session-4.scope: Deactivated successfully. May 15 10:22:00.471276 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit. May 15 10:22:00.471453 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:41546.service. May 15 10:22:00.472533 systemd-logind[1207]: Removed session 4. May 15 10:22:00.517381 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 41546 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:22:00.519355 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:22:00.522974 systemd-logind[1207]: New session 5 of user core. May 15 10:22:00.523849 systemd[1]: Started session-5.scope. May 15 10:22:00.594598 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 10:22:00.594864 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:22:00.659950 systemd[1]: Starting docker.service... 
May 15 10:22:00.804276 env[1332]: time="2025-05-15T10:22:00.804142040Z" level=info msg="Starting up"
May 15 10:22:00.806626 env[1332]: time="2025-05-15T10:22:00.806471626Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 10:22:00.806626 env[1332]: time="2025-05-15T10:22:00.806495026Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 10:22:00.806626 env[1332]: time="2025-05-15T10:22:00.806515429Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
May 15 10:22:00.806626 env[1332]: time="2025-05-15T10:22:00.806527334Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 10:22:00.812816 env[1332]: time="2025-05-15T10:22:00.812782819Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 10:22:00.813452 env[1332]: time="2025-05-15T10:22:00.813435343Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 10:22:00.813500 env[1332]: time="2025-05-15T10:22:00.813462479Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
May 15 10:22:00.813500 env[1332]: time="2025-05-15T10:22:00.813474097Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 10:22:00.819177 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport196101080-merged.mount: Deactivated successfully.
May 15 10:22:00.952401 env[1332]: time="2025-05-15T10:22:00.952359590Z" level=info msg="Loading containers: start."
May 15 10:22:01.091695 kernel: Initializing XFRM netlink socket
May 15 10:22:01.119351 env[1332]: time="2025-05-15T10:22:01.119301386Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 15 10:22:01.171876 systemd-networkd[1044]: docker0: Link UP
May 15 10:22:01.190068 env[1332]: time="2025-05-15T10:22:01.190022833Z" level=info msg="Loading containers: done."
May 15 10:22:01.212477 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4016411739-merged.mount: Deactivated successfully.
May 15 10:22:01.216035 env[1332]: time="2025-05-15T10:22:01.215997735Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 10:22:01.216324 env[1332]: time="2025-05-15T10:22:01.216301775Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 15 10:22:01.216501 env[1332]: time="2025-05-15T10:22:01.216483135Z" level=info msg="Daemon has completed initialization"
May 15 10:22:01.233553 systemd[1]: Started docker.service.
May 15 10:22:01.241604 env[1332]: time="2025-05-15T10:22:01.241431175Z" level=info msg="API listen on /run/docker.sock"
May 15 10:22:02.030270 env[1221]: time="2025-05-15T10:22:02.030215660Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 15 10:22:02.570963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900884854.mount: Deactivated successfully.
May 15 10:22:04.153508 env[1221]: time="2025-05-15T10:22:04.153445633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:04.154715 env[1221]: time="2025-05-15T10:22:04.154661943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:04.156609 env[1221]: time="2025-05-15T10:22:04.156573733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:04.158511 env[1221]: time="2025-05-15T10:22:04.158477074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:04.159348 env[1221]: time="2025-05-15T10:22:04.159309920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 15 10:22:04.160513 env[1221]: time="2025-05-15T10:22:04.160488862Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 15 10:22:05.821287 env[1221]: time="2025-05-15T10:22:05.821236054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:05.822747 env[1221]: time="2025-05-15T10:22:05.822711576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:05.824477 env[1221]: time="2025-05-15T10:22:05.824438444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:05.826165 env[1221]: time="2025-05-15T10:22:05.826138839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:05.827199 env[1221]: time="2025-05-15T10:22:05.827165019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 15 10:22:05.827867 env[1221]: time="2025-05-15T10:22:05.827843937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 15 10:22:07.118923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 10:22:07.119102 systemd[1]: Stopped kubelet.service.
May 15 10:22:07.120502 systemd[1]: Starting kubelet.service...
May 15 10:22:07.205784 systemd[1]: Started kubelet.service.
May 15 10:22:07.241344 kubelet[1465]: E0515 10:22:07.241293 1465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:22:07.243814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:22:07.243945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:22:07.428697 env[1221]: time="2025-05-15T10:22:07.428569279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:07.430015 env[1221]: time="2025-05-15T10:22:07.429982932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:07.432460 env[1221]: time="2025-05-15T10:22:07.431706352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:07.433285 env[1221]: time="2025-05-15T10:22:07.433259875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:07.434888 env[1221]: time="2025-05-15T10:22:07.434857407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 15 10:22:07.435587 env[1221]: time="2025-05-15T10:22:07.435402177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 15 10:22:08.463545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161323851.mount: Deactivated successfully.
May 15 10:22:09.046220 env[1221]: time="2025-05-15T10:22:09.046164779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:09.047748 env[1221]: time="2025-05-15T10:22:09.047706557Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:09.048966 env[1221]: time="2025-05-15T10:22:09.048929875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:09.050144 env[1221]: time="2025-05-15T10:22:09.050115337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:09.050587 env[1221]: time="2025-05-15T10:22:09.050553132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 15 10:22:09.051138 env[1221]: time="2025-05-15T10:22:09.051110949Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 10:22:09.567178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618192460.mount: Deactivated successfully.
May 15 10:22:10.293264 env[1221]: time="2025-05-15T10:22:10.293218828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.294705 env[1221]: time="2025-05-15T10:22:10.294646036Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.296480 env[1221]: time="2025-05-15T10:22:10.296443871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.298401 env[1221]: time="2025-05-15T10:22:10.298363785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.299915 env[1221]: time="2025-05-15T10:22:10.299878193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 15 10:22:10.300370 env[1221]: time="2025-05-15T10:22:10.300334126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 10:22:10.733661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089326195.mount: Deactivated successfully.
May 15 10:22:10.740312 env[1221]: time="2025-05-15T10:22:10.740213916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.741917 env[1221]: time="2025-05-15T10:22:10.741872111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.745533 env[1221]: time="2025-05-15T10:22:10.745491948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.747117 env[1221]: time="2025-05-15T10:22:10.747082761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:10.747612 env[1221]: time="2025-05-15T10:22:10.747552106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 15 10:22:10.748117 env[1221]: time="2025-05-15T10:22:10.748089640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 15 10:22:11.295422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710228693.mount: Deactivated successfully.
May 15 10:22:14.020025 env[1221]: time="2025-05-15T10:22:14.019661660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:14.187425 env[1221]: time="2025-05-15T10:22:14.187363159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:14.191222 env[1221]: time="2025-05-15T10:22:14.191174516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:14.194037 env[1221]: time="2025-05-15T10:22:14.193996597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:14.195064 env[1221]: time="2025-05-15T10:22:14.195029209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 15 10:22:17.494819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 10:22:17.495003 systemd[1]: Stopped kubelet.service.
May 15 10:22:17.496371 systemd[1]: Starting kubelet.service...
May 15 10:22:17.580771 systemd[1]: Started kubelet.service.
May 15 10:22:17.615458 kubelet[1498]: E0515 10:22:17.615408 1498 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:22:17.617537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:22:17.617665 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:22:18.787187 systemd[1]: Stopped kubelet.service.
May 15 10:22:18.789372 systemd[1]: Starting kubelet.service...
May 15 10:22:18.823410 systemd[1]: Reloading.
May 15 10:22:18.889018 /usr/lib/systemd/system-generators/torcx-generator[1532]: time="2025-05-15T10:22:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:22:18.889389 /usr/lib/systemd/system-generators/torcx-generator[1532]: time="2025-05-15T10:22:18Z" level=info msg="torcx already run"
May 15 10:22:19.029752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:22:19.029771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:22:19.048252 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:22:19.132841 systemd[1]: Started kubelet.service.
May 15 10:22:19.134519 systemd[1]: Stopping kubelet.service...
May 15 10:22:19.134888 systemd[1]: kubelet.service: Deactivated successfully.
May 15 10:22:19.135064 systemd[1]: Stopped kubelet.service.
May 15 10:22:19.136551 systemd[1]: Starting kubelet.service...
May 15 10:22:19.221929 systemd[1]: Started kubelet.service.
May 15 10:22:19.260106 kubelet[1576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:22:19.260106 kubelet[1576]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 10:22:19.260106 kubelet[1576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:22:19.260440 kubelet[1576]: I0515 10:22:19.260198 1576 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 10:22:20.157649 kubelet[1576]: I0515 10:22:20.157594 1576 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 15 10:22:20.157649 kubelet[1576]: I0515 10:22:20.157637 1576 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 10:22:20.157928 kubelet[1576]: I0515 10:22:20.157901 1576 server.go:929] "Client rotation is on, will bootstrap in background"
May 15 10:22:20.202012 kubelet[1576]: E0515 10:22:20.201972 1576 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:20.203435 kubelet[1576]: I0515 10:22:20.203414 1576 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 10:22:20.214335 kubelet[1576]: E0515 10:22:20.214301 1576 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 10:22:20.214335 kubelet[1576]: I0515 10:22:20.214334 1576 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 10:22:20.217874 kubelet[1576]: I0515 10:22:20.217847 1576 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 10:22:20.218786 kubelet[1576]: I0515 10:22:20.218756 1576 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 10:22:20.218955 kubelet[1576]: I0515 10:22:20.218924 1576 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 10:22:20.219120 kubelet[1576]: I0515 10:22:20.218950 1576 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 10:22:20.219265 kubelet[1576]: I0515 10:22:20.219246 1576 topology_manager.go:138] "Creating topology manager with none policy"
May 15 10:22:20.219265 kubelet[1576]: I0515 10:22:20.219259 1576 container_manager_linux.go:300] "Creating device plugin manager"
May 15 10:22:20.219467 kubelet[1576]: I0515 10:22:20.219434 1576 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:22:20.223968 kubelet[1576]: I0515 10:22:20.223936 1576 kubelet.go:408] "Attempting to sync node with API server"
May 15 10:22:20.223968 kubelet[1576]: I0515 10:22:20.223967 1576 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 10:22:20.224060 kubelet[1576]: I0515 10:22:20.223995 1576 kubelet.go:314] "Adding apiserver pod source"
May 15 10:22:20.224060 kubelet[1576]: I0515 10:22:20.224022 1576 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 10:22:20.225408 kubelet[1576]: W0515 10:22:20.225363 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:20.225868 kubelet[1576]: E0515 10:22:20.225841 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:20.227372 kubelet[1576]: I0515 10:22:20.227325 1576 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 10:22:20.227773 kubelet[1576]: W0515 10:22:20.227731 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:20.227886 kubelet[1576]: E0515 10:22:20.227866 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:20.231142 kubelet[1576]: I0515 10:22:20.231067 1576 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 10:22:20.231816 kubelet[1576]: W0515 10:22:20.231797 1576 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 10:22:20.237234 kubelet[1576]: I0515 10:22:20.237192 1576 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 10:22:20.237721 kubelet[1576]: I0515 10:22:20.236666 1576 server.go:1269] "Started kubelet"
May 15 10:22:20.237979 kubelet[1576]: I0515 10:22:20.237934 1576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 10:22:20.238300 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 15 10:22:20.238383 kubelet[1576]: I0515 10:22:20.238366 1576 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 10:22:20.239302 kubelet[1576]: I0515 10:22:20.238468 1576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 10:22:20.239763 kubelet[1576]: E0515 10:22:20.239740 1576 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 10:22:20.239763 kubelet[1576]: I0515 10:22:20.238626 1576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 10:22:20.240163 kubelet[1576]: I0515 10:22:20.240132 1576 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 10:22:20.240243 kubelet[1576]: I0515 10:22:20.238522 1576 server.go:460] "Adding debug handlers to kubelet server"
May 15 10:22:20.240784 kubelet[1576]: W0515 10:22:20.240740 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:20.240843 kubelet[1576]: E0515 10:22:20.240796 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:20.240996 kubelet[1576]: I0515 10:22:20.240983 1576 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 15 10:22:20.241183 kubelet[1576]: I0515 10:22:20.241053 1576 reconciler.go:26] "Reconciler: start to sync state"
May 15 10:22:20.241303 kubelet[1576]: E0515 10:22:20.241277 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:20.241609 kubelet[1576]: E0515 10:22:20.241482 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms"
May 15 10:22:20.241907 kubelet[1576]: I0515 10:22:20.241861 1576 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 10:22:20.242231 kubelet[1576]: E0515 10:22:20.240094 1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fac38c31c71aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:22:20.234609066 +0000 UTC m=+1.009182575,LastTimestamp:2025-05-15 10:22:20.234609066 +0000 UTC m=+1.009182575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 10:22:20.243661 kubelet[1576]: I0515 10:22:20.243637 1576 factory.go:221] Registration of the containerd container factory successfully
May 15 10:22:20.243661 kubelet[1576]: I0515 10:22:20.243659 1576 factory.go:221] Registration of the systemd container factory successfully
May 15 10:22:20.258682 kubelet[1576]: I0515 10:22:20.256862 1576 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 10:22:20.258682 kubelet[1576]: I0515 10:22:20.256884 1576 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 10:22:20.258682 kubelet[1576]: I0515 10:22:20.256904 1576 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:22:20.260486 kubelet[1576]: I0515 10:22:20.260453 1576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 10:22:20.261766 kubelet[1576]: I0515 10:22:20.261746 1576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 10:22:20.261854 kubelet[1576]: I0515 10:22:20.261843 1576 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 10:22:20.261917 kubelet[1576]: I0515 10:22:20.261908 1576 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 10:22:20.262013 kubelet[1576]: E0515 10:22:20.261996 1576 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 10:22:20.262925 kubelet[1576]: W0515 10:22:20.262878 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:20.263079 kubelet[1576]: E0515 10:22:20.263040 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:20.341767 kubelet[1576]: E0515 10:22:20.341719 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:20.351035 kubelet[1576]: I0515 10:22:20.351002 1576 policy_none.go:49] "None policy: Start"
May 15 10:22:20.351943 kubelet[1576]: I0515 10:22:20.351907 1576 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 10:22:20.352017 kubelet[1576]: I0515 10:22:20.351953 1576 state_mem.go:35] "Initializing new in-memory state store"
May 15 10:22:20.358370 systemd[1]: Created slice kubepods.slice.
May 15 10:22:20.362444 kubelet[1576]: E0515 10:22:20.362417 1576 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 10:22:20.362689 systemd[1]: Created slice kubepods-burstable.slice.
May 15 10:22:20.365383 systemd[1]: Created slice kubepods-besteffort.slice.
May 15 10:22:20.380513 kubelet[1576]: I0515 10:22:20.380468 1576 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 10:22:20.380645 kubelet[1576]: I0515 10:22:20.380625 1576 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 10:22:20.380712 kubelet[1576]: I0515 10:22:20.380644 1576 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 10:22:20.381302 kubelet[1576]: I0515 10:22:20.381261 1576 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 10:22:20.382080 kubelet[1576]: E0515 10:22:20.382050 1576 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 10:22:20.442975 kubelet[1576]: E0515 10:22:20.442311 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms"
May 15 10:22:20.482615 kubelet[1576]: I0515 10:22:20.482582 1576 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 10:22:20.483003 kubelet[1576]: E0515 10:22:20.482980 1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
May 15 10:22:20.570373 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 15 10:22:20.583432 systemd[1]: Created slice kubepods-burstable-podc3f524b9eaecd53cfb091dbd9964d11c.slice.
May 15 10:22:20.586917 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 15 10:22:20.642386 kubelet[1576]: I0515 10:22:20.642354 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 15 10:22:20.642600 kubelet[1576]: I0515 10:22:20.642581 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f524b9eaecd53cfb091dbd9964d11c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3f524b9eaecd53cfb091dbd9964d11c\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:22:20.642720 kubelet[1576]: I0515 10:22:20.642702 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:22:20.642837 kubelet[1576]: I0515 10:22:20.642822 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:22:20.642935 kubelet[1576]: I0515 10:22:20.642919 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3f524b9eaecd53cfb091dbd9964d11c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3f524b9eaecd53cfb091dbd9964d11c\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:22:20.643033 kubelet[1576]: I0515 10:22:20.643019 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f524b9eaecd53cfb091dbd9964d11c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c3f524b9eaecd53cfb091dbd9964d11c\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:22:20.643126 kubelet[1576]: I0515 10:22:20.643112 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:22:20.643210 kubelet[1576]: I0515 10:22:20.643198 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:22:20.643302 kubelet[1576]: I0515 10:22:20.643289 1576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:22:20.684632 kubelet[1576]: I0515 10:22:20.684606 1576 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 10:22:20.684978 kubelet[1576]: E0515 10:22:20.684955 1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
May 15 10:22:20.843635 kubelet[1576]: E0515 10:22:20.843516 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms"
May 15 10:22:20.881971 kubelet[1576]: E0515 10:22:20.881941 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:20.882603 env[1221]: time="2025-05-15T10:22:20.882568177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 15 10:22:20.885865 kubelet[1576]: E0515 10:22:20.885806 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:20.886233 env[1221]: time="2025-05-15T10:22:20.886174891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c3f524b9eaecd53cfb091dbd9964d11c,Namespace:kube-system,Attempt:0,}"
May 15 10:22:20.888758 kubelet[1576]: E0515 10:22:20.888737 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:20.889126 env[1221]: time="2025-05-15T10:22:20.889086462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 15 10:22:21.086748 kubelet[1576]: I0515 10:22:21.086716 1576 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 10:22:21.087060 kubelet[1576]: E0515 10:22:21.087038 1576 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
May 15 10:22:21.293105 kubelet[1576]: W0515 10:22:21.292698 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:21.293105 kubelet[1576]: E0515 10:22:21.292768 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:21.323523 kubelet[1576]: W0515 10:22:21.323460 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:21.323657 kubelet[1576]: E0515 10:22:21.323528 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:21.460104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474325719.mount: Deactivated successfully.
May 15 10:22:21.465052 env[1221]: time="2025-05-15T10:22:21.465012575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.467136 env[1221]: time="2025-05-15T10:22:21.467095328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.468514 env[1221]: time="2025-05-15T10:22:21.468479689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.469525 env[1221]: time="2025-05-15T10:22:21.469503958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.470244 env[1221]: time="2025-05-15T10:22:21.470223383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.471615 env[1221]: time="2025-05-15T10:22:21.471589715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.475082 env[1221]: time="2025-05-15T10:22:21.475053504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.477361 env[1221]: time="2025-05-15T10:22:21.477333410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.478944 env[1221]: time="2025-05-15T10:22:21.478919052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.479895 env[1221]: time="2025-05-15T10:22:21.479867720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.484141 env[1221]: time="2025-05-15T10:22:21.484101374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.485024 env[1221]: time="2025-05-15T10:22:21.484937183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:22:21.502144 env[1221]: time="2025-05-15T10:22:21.502084613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:22:21.502289 env[1221]: time="2025-05-15T10:22:21.502132770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:22:21.502289 env[1221]: time="2025-05-15T10:22:21.502160454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:22:21.502517 env[1221]: time="2025-05-15T10:22:21.502457887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33a21cee88df6928de3ced8884ace0ca284bb2b78cd42a821ea8d3199aa2c91c pid=1624 runtime=io.containerd.runc.v2
May 15 10:22:21.503081 env[1221]: time="2025-05-15T10:22:21.503011487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:22:21.503081 env[1221]: time="2025-05-15T10:22:21.503046743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:22:21.503081 env[1221]: time="2025-05-15T10:22:21.503057000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:22:21.503357 env[1221]: time="2025-05-15T10:22:21.503305715Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0840818ea2bc8ae7b6a19d8dd81db6f4774b10d615746a35bd33c220701bb23c pid=1626 runtime=io.containerd.runc.v2
May 15 10:22:21.508761 env[1221]: time="2025-05-15T10:22:21.507201992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:22:21.508761 env[1221]: time="2025-05-15T10:22:21.507255397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:22:21.508761 env[1221]: time="2025-05-15T10:22:21.507270180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:22:21.508761 env[1221]: time="2025-05-15T10:22:21.507413769Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7b1ec00b5b5d73c51ee9fe051ce187c160019187f69870aa228cf061c5032fa pid=1654 runtime=io.containerd.runc.v2
May 15 10:22:21.518872 systemd[1]: Started cri-containerd-0840818ea2bc8ae7b6a19d8dd81db6f4774b10d615746a35bd33c220701bb23c.scope.
May 15 10:22:21.521424 systemd[1]: Started cri-containerd-33a21cee88df6928de3ced8884ace0ca284bb2b78cd42a821ea8d3199aa2c91c.scope.
May 15 10:22:21.526160 systemd[1]: Started cri-containerd-f7b1ec00b5b5d73c51ee9fe051ce187c160019187f69870aa228cf061c5032fa.scope.
May 15 10:22:21.573821 env[1221]: time="2025-05-15T10:22:21.571864388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c3f524b9eaecd53cfb091dbd9964d11c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0840818ea2bc8ae7b6a19d8dd81db6f4774b10d615746a35bd33c220701bb23c\""
May 15 10:22:21.575177 kubelet[1576]: E0515 10:22:21.574973 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:21.578486 env[1221]: time="2025-05-15T10:22:21.578444853Z" level=info msg="CreateContainer within sandbox \"0840818ea2bc8ae7b6a19d8dd81db6f4774b10d615746a35bd33c220701bb23c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 10:22:21.585275 env[1221]: time="2025-05-15T10:22:21.585226759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33a21cee88df6928de3ced8884ace0ca284bb2b78cd42a821ea8d3199aa2c91c\""
May 15 10:22:21.586193 kubelet[1576]: E0515 10:22:21.586003 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:21.587741 env[1221]: time="2025-05-15T10:22:21.587704980Z" level=info msg="CreateContainer within sandbox \"33a21cee88df6928de3ced8884ace0ca284bb2b78cd42a821ea8d3199aa2c91c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 10:22:21.591964 env[1221]: time="2025-05-15T10:22:21.591921807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b1ec00b5b5d73c51ee9fe051ce187c160019187f69870aa228cf061c5032fa\""
May 15 10:22:21.592817 kubelet[1576]: E0515 10:22:21.592598 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:21.594537 env[1221]: time="2025-05-15T10:22:21.594500187Z" level=info msg="CreateContainer within sandbox \"f7b1ec00b5b5d73c51ee9fe051ce187c160019187f69870aa228cf061c5032fa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 10:22:21.594815 env[1221]: time="2025-05-15T10:22:21.594777468Z" level=info msg="CreateContainer within sandbox \"0840818ea2bc8ae7b6a19d8dd81db6f4774b10d615746a35bd33c220701bb23c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ed8783098d6afff42f16f05b4e3d96d858e6481ea50e7bd8e50f52800554e79\""
May 15 10:22:21.595322 env[1221]: time="2025-05-15T10:22:21.595293689Z" level=info msg="StartContainer for \"3ed8783098d6afff42f16f05b4e3d96d858e6481ea50e7bd8e50f52800554e79\""
May 15 10:22:21.604079 env[1221]: time="2025-05-15T10:22:21.604026537Z" level=info msg="CreateContainer within sandbox \"33a21cee88df6928de3ced8884ace0ca284bb2b78cd42a821ea8d3199aa2c91c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cdaa3d2ca40cff8a6f4a430cba030dca73c662414773108e56d52ab9fa5605f3\""
May 15 10:22:21.605363 env[1221]: time="2025-05-15T10:22:21.605315067Z" level=info msg="StartContainer for \"cdaa3d2ca40cff8a6f4a430cba030dca73c662414773108e56d52ab9fa5605f3\""
May 15 10:22:21.609977 env[1221]: time="2025-05-15T10:22:21.609930767Z" level=info msg="CreateContainer within sandbox \"f7b1ec00b5b5d73c51ee9fe051ce187c160019187f69870aa228cf061c5032fa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"49dc987826d4fcb47bf3c054e263c4e4b7de63a6043098d472c4160f9db78fa4\""
May 15 10:22:21.610765 env[1221]: time="2025-05-15T10:22:21.610731441Z" level=info msg="StartContainer for \"49dc987826d4fcb47bf3c054e263c4e4b7de63a6043098d472c4160f9db78fa4\""
May 15 10:22:21.614202 kubelet[1576]: W0515 10:22:21.614120 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:21.614202 kubelet[1576]: E0515 10:22:21.614170 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:21.614360 systemd[1]: Started cri-containerd-3ed8783098d6afff42f16f05b4e3d96d858e6481ea50e7bd8e50f52800554e79.scope.
May 15 10:22:21.626254 systemd[1]: Started cri-containerd-cdaa3d2ca40cff8a6f4a430cba030dca73c662414773108e56d52ab9fa5605f3.scope.
May 15 10:22:21.644233 kubelet[1576]: E0515 10:22:21.644190 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="1.6s"
May 15 10:22:21.647225 systemd[1]: Started cri-containerd-49dc987826d4fcb47bf3c054e263c4e4b7de63a6043098d472c4160f9db78fa4.scope.
May 15 10:22:21.658359 kubelet[1576]: W0515 10:22:21.658250 1576 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
May 15 10:22:21.658359 kubelet[1576]: E0515 10:22:21.658323 1576 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
May 15 10:22:21.726675 env[1221]: time="2025-05-15T10:22:21.726202361Z" level=info msg="StartContainer for \"3ed8783098d6afff42f16f05b4e3d96d858e6481ea50e7bd8e50f52800554e79\" returns successfully"
May 15 10:22:21.744365 env[1221]: time="2025-05-15T10:22:21.739742895Z" level=info msg="StartContainer for \"49dc987826d4fcb47bf3c054e263c4e4b7de63a6043098d472c4160f9db78fa4\" returns successfully"
May 15 10:22:21.744637 env[1221]: time="2025-05-15T10:22:21.744606991Z" level=info msg="StartContainer for \"cdaa3d2ca40cff8a6f4a430cba030dca73c662414773108e56d52ab9fa5605f3\" returns successfully"
May 15 10:22:21.888909 kubelet[1576]: I0515 10:22:21.888815 1576 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 10:22:22.275008 kubelet[1576]: E0515 10:22:22.274917 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:22.277111 kubelet[1576]: E0515 10:22:22.277085 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:22.278409 kubelet[1576]: E0515 10:22:22.278383 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:23.280892 kubelet[1576]: E0515 10:22:23.280748 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:23.280892 kubelet[1576]: E0515 10:22:23.280825 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:23.583594 kubelet[1576]: E0515 10:22:23.583477 1576 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 15 10:22:23.694036 kubelet[1576]: I0515 10:22:23.693985 1576 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 15 10:22:23.694036 kubelet[1576]: E0515 10:22:23.694026 1576 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 15 10:22:23.704459 kubelet[1576]: E0515 10:22:23.704414 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:23.805350 kubelet[1576]: E0515 10:22:23.805298 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:23.905888 kubelet[1576]: E0515 10:22:23.905778 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.006325 kubelet[1576]: E0515 10:22:24.006285 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.106886 kubelet[1576]: E0515 10:22:24.106801 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.207623 kubelet[1576]: E0515 10:22:24.207516 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.282642 kubelet[1576]: E0515 10:22:24.282608 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:24.307641 kubelet[1576]: E0515 10:22:24.307612 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.407961 kubelet[1576]: E0515 10:22:24.407905 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.508529 kubelet[1576]: E0515 10:22:24.508434 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.608984 kubelet[1576]: E0515 10:22:24.608935 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:24.709581 kubelet[1576]: E0515 10:22:24.709543 1576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:22:25.227003 kubelet[1576]: I0515 10:22:25.226962 1576 apiserver.go:52] "Watching apiserver"
May 15 10:22:25.241882 kubelet[1576]: I0515 10:22:25.241839 1576 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 15 10:22:25.683493 systemd[1]: Reloading.
May 15 10:22:25.729938 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-05-15T10:22:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:22:25.729968 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-05-15T10:22:25Z" level=info msg="torcx already run"
May 15 10:22:25.792876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:22:25.792895 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:22:25.811207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:22:25.903347 systemd[1]: Stopping kubelet.service...
May 15 10:22:25.928289 systemd[1]: kubelet.service: Deactivated successfully.
May 15 10:22:25.928498 systemd[1]: Stopped kubelet.service.
May 15 10:22:25.928550 systemd[1]: kubelet.service: Consumed 1.330s CPU time.
May 15 10:22:25.930322 systemd[1]: Starting kubelet.service...
May 15 10:22:26.011444 systemd[1]: Started kubelet.service.
May 15 10:22:26.054312 kubelet[1917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:22:26.054735 kubelet[1917]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 10:22:26.054793 kubelet[1917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:22:26.054938 kubelet[1917]: I0515 10:22:26.054905 1917 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 10:22:26.062584 kubelet[1917]: I0515 10:22:26.062543 1917 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 15 10:22:26.062797 kubelet[1917]: I0515 10:22:26.062785 1917 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 10:22:26.063134 kubelet[1917]: I0515 10:22:26.063116 1917 server.go:929] "Client rotation is on, will bootstrap in background"
May 15 10:22:26.064527 kubelet[1917]: I0515 10:22:26.064502 1917 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 10:22:26.066521 kubelet[1917]: I0515 10:22:26.066492 1917 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 10:22:26.072150 kubelet[1917]: E0515 10:22:26.072123 1917 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 10:22:26.072300 kubelet[1917]: I0515 10:22:26.072287 1917 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 10:22:26.074880 kubelet[1917]: I0515 10:22:26.074856 1917 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 10:22:26.075105 kubelet[1917]: I0515 10:22:26.075094 1917 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 10:22:26.075290 kubelet[1917]: I0515 10:22:26.075262 1917 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 10:22:26.075996 kubelet[1917]: I0515 10:22:26.075343 1917 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 10:22:26.077936 kubelet[1917]: I0515 10:22:26.077904 1917 topology_manager.go:138] "Creating topology manager with none policy"
May 15 10:22:26.078104 kubelet[1917]: I0515 10:22:26.078093 1917 container_manager_linux.go:300] "Creating device plugin manager"
May 15 10:22:26.078357 kubelet[1917]: I0515 10:22:26.078339 1917 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:22:26.078536 kubelet[1917]: I0515 10:22:26.078527 1917 kubelet.go:408] "Attempting to sync node with API server"
May 15 10:22:26.078623 kubelet[1917]: I0515 10:22:26.078605 1917 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 10:22:26.078744 kubelet[1917]: I0515 10:22:26.078733 1917 kubelet.go:314] "Adding apiserver pod source"
May 15 10:22:26.082912 kubelet[1917]: I0515 10:22:26.082893 1917 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 10:22:26.083642 kubelet[1917]: I0515 10:22:26.083615 1917 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 10:22:26.084266 kubelet[1917]: I0515 10:22:26.084245 1917 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 10:22:26.084771 kubelet[1917]: I0515 10:22:26.084756 1917 server.go:1269] "Started kubelet"
May 15 10:22:26.086601 kubelet[1917]: I0515 10:22:26.086583 1917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 10:22:26.096016 kubelet[1917]: I0515 10:22:26.095975 1917 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 10:22:26.097871 kubelet[1917]: I0515 10:22:26.097524 1917 server.go:460] "Adding debug handlers to kubelet server"
May 15 10:22:26.101234 kubelet[1917]: I0515 10:22:26.098527 1917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 10:22:26.101234 kubelet[1917]: I0515 10:22:26.099906 1917 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 10:22:26.101234 kubelet[1917]: I0515 10:22:26.100121 1917 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 10:22:26.101234 kubelet[1917]: I0515 10:22:26.100225 1917 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 15 10:22:26.101234 kubelet[1917]: I0515 10:22:26.100761 1917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 10:22:26.101416 kubelet[1917]: I0515 10:22:26.101314 1917 reconciler.go:26] "Reconciler: start to sync state"
May 15 10:22:26.102145 kubelet[1917]: I0515 10:22:26.102119 1917 factory.go:221] Registration of the systemd container factory successfully
May 15 10:22:26.102237 kubelet[1917]: I0515 10:22:26.102223 1917 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 10:22:26.105501 kubelet[1917]: E0515 10:22:26.105482 1917 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 10:22:26.105643 kubelet[1917]: I0515 10:22:26.105516 1917 factory.go:221] Registration of the containerd container factory successfully
May 15 10:22:26.107245 kubelet[1917]: I0515 10:22:26.107002 1917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 10:22:26.108074 kubelet[1917]: I0515 10:22:26.108042 1917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 10:22:26.108074 kubelet[1917]: I0515 10:22:26.108070 1917 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 10:22:26.108178 kubelet[1917]: I0515 10:22:26.108088 1917 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 10:22:26.108178 kubelet[1917]: E0515 10:22:26.108133 1917 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 10:22:26.138574 kubelet[1917]: I0515 10:22:26.138546 1917 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 10:22:26.138761 kubelet[1917]: I0515 10:22:26.138747 1917 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 10:22:26.138844 kubelet[1917]: I0515 10:22:26.138836 1917 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:22:26.139048 kubelet[1917]: I0515 10:22:26.139035 1917 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 10:22:26.139134 kubelet[1917]: I0515 10:22:26.139109 1917 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 10:22:26.139194 kubelet[1917]: I0515 10:22:26.139185 1917 policy_none.go:49] "None policy: Start"
May 15 10:22:26.139933 kubelet[1917]: I0515 10:22:26.139916 1917 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 10:22:26.140013 kubelet[1917]: I0515 10:22:26.139941 1917 state_mem.go:35] "Initializing new in-memory state store"
May 15 10:22:26.140151 kubelet[1917]: I0515 10:22:26.140119 1917 state_mem.go:75] "Updated machine memory state"
May 15 10:22:26.144088 kubelet[1917]: I0515 10:22:26.144061 1917 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 10:22:26.144245 kubelet[1917]: I0515 10:22:26.144222 1917 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 10:22:26.144291 kubelet[1917]: I0515 10:22:26.144239 1917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 10:22:26.145224 kubelet[1917]: I0515 10:22:26.145204 1917 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 10:22:26.249843 kubelet[1917]: I0515 10:22:26.249442 1917 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 10:22:26.256405 kubelet[1917]: I0515 10:22:26.256378 1917 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 15 10:22:26.256506 kubelet[1917]: I0515 10:22:26.256453 1917 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 15 10:22:26.302720 kubelet[1917]: I0515 10:22:26.301792 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f524b9eaecd53cfb091dbd9964d11c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3f524b9eaecd53cfb091dbd9964d11c\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:22:26.302931 kubelet[1917]: I0515 10:22:26.302908 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f524b9eaecd53cfb091dbd9964d11c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c3f524b9eaecd53cfb091dbd9964d11c\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:22:26.303014 kubelet[1917]: I0515 10:22:26.303002 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:22:26.303077 kubelet[1917]: I0515 10:22:26.303066 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\"
(UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:22:26.303142 kubelet[1917]: I0515 10:22:26.303131 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:22:26.303213 kubelet[1917]: I0515 10:22:26.303200 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:22:26.303278 kubelet[1917]: I0515 10:22:26.303266 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 10:22:26.303341 kubelet[1917]: I0515 10:22:26.303330 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3f524b9eaecd53cfb091dbd9964d11c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c3f524b9eaecd53cfb091dbd9964d11c\") " pod="kube-system/kube-apiserver-localhost" May 15 10:22:26.303403 kubelet[1917]: I0515 10:22:26.303391 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:22:26.517541 kubelet[1917]: E0515 10:22:26.517464 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:26.518984 kubelet[1917]: E0515 10:22:26.518914 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:26.519045 kubelet[1917]: E0515 10:22:26.519023 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:26.680600 sudo[1951]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 10:22:26.680850 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 15 10:22:27.084157 kubelet[1917]: I0515 10:22:27.084056 1917 apiserver.go:52] "Watching apiserver" May 15 10:22:27.100956 kubelet[1917]: I0515 10:22:27.100917 1917 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 10:22:27.122322 kubelet[1917]: E0515 10:22:27.122272 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:27.132050 kubelet[1917]: E0515 10:22:27.132010 1917 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 10:22:27.132184 kubelet[1917]: E0515 10:22:27.132174 1917 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:27.143746 kubelet[1917]: E0515 10:22:27.143690 1917 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:22:27.143901 kubelet[1917]: E0515 10:22:27.143889 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:27.153953 sudo[1951]: pam_unix(sudo:session): session closed for user root May 15 10:22:27.178530 kubelet[1917]: I0515 10:22:27.178460 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.17844319 podStartE2EDuration="1.17844319s" podCreationTimestamp="2025-05-15 10:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:22:27.171127764 +0000 UTC m=+1.154893327" watchObservedRunningTime="2025-05-15 10:22:27.17844319 +0000 UTC m=+1.162208753" May 15 10:22:27.188049 kubelet[1917]: I0515 10:22:27.187985 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.187968394 podStartE2EDuration="1.187968394s" podCreationTimestamp="2025-05-15 10:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:22:27.179133203 +0000 UTC m=+1.162898766" watchObservedRunningTime="2025-05-15 10:22:27.187968394 +0000 UTC m=+1.171733917" May 15 10:22:27.196566 kubelet[1917]: I0515 10:22:27.196516 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.196494605 podStartE2EDuration="1.196494605s" podCreationTimestamp="2025-05-15 10:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:22:27.188334576 +0000 UTC m=+1.172100099" watchObservedRunningTime="2025-05-15 10:22:27.196494605 +0000 UTC m=+1.180260168" May 15 10:22:28.123255 kubelet[1917]: E0515 10:22:28.123197 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:28.123653 kubelet[1917]: E0515 10:22:28.123327 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:29.286642 sudo[1320]: pam_unix(sudo:session): session closed for user root May 15 10:22:29.288148 sshd[1317]: pam_unix(sshd:session): session closed for user core May 15 10:22:29.290500 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:41546.service: Deactivated successfully. May 15 10:22:29.291328 systemd[1]: session-5.scope: Deactivated successfully. May 15 10:22:29.291539 systemd[1]: session-5.scope: Consumed 7.007s CPU time. May 15 10:22:29.291924 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit. May 15 10:22:29.292617 systemd-logind[1207]: Removed session 5. May 15 10:22:31.468033 kubelet[1917]: I0515 10:22:31.467996 1917 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 10:22:31.468390 env[1221]: time="2025-05-15T10:22:31.468299140Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 10:22:31.468576 kubelet[1917]: I0515 10:22:31.468476 1917 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 10:22:32.145241 systemd[1]: Created slice kubepods-besteffort-pod24a2fd52_eb1f_4294_8453_8bed5ecd6dc1.slice. May 15 10:22:32.149293 kubelet[1917]: W0515 10:22:32.149255 1917 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:22:32.149405 kubelet[1917]: E0515 10:22:32.149305 1917 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 10:22:32.149779 kubelet[1917]: W0515 10:22:32.149758 1917 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:22:32.149823 kubelet[1917]: E0515 10:22:32.149785 1917 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 10:22:32.156966 systemd[1]: Created slice 
kubepods-burstable-podaacf5c52_8891_4638_b518_1068ca37a946.slice. May 15 10:22:32.246116 kubelet[1917]: I0515 10:22:32.246056 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-etc-cni-netd\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246116 kubelet[1917]: I0515 10:22:32.246121 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aacf5c52-8891-4638-b518-1068ca37a946-clustermesh-secrets\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246288 kubelet[1917]: I0515 10:22:32.246139 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aacf5c52-8891-4638-b518-1068ca37a946-cilium-config-path\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246288 kubelet[1917]: I0515 10:22:32.246155 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-run\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246288 kubelet[1917]: I0515 10:22:32.246173 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-net\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246288 kubelet[1917]: I0515 
10:22:32.246202 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24a2fd52-eb1f-4294-8453-8bed5ecd6dc1-kube-proxy\") pod \"kube-proxy-4mkq6\" (UID: \"24a2fd52-eb1f-4294-8453-8bed5ecd6dc1\") " pod="kube-system/kube-proxy-4mkq6" May 15 10:22:32.246288 kubelet[1917]: I0515 10:22:32.246222 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24a2fd52-eb1f-4294-8453-8bed5ecd6dc1-lib-modules\") pod \"kube-proxy-4mkq6\" (UID: \"24a2fd52-eb1f-4294-8453-8bed5ecd6dc1\") " pod="kube-system/kube-proxy-4mkq6" May 15 10:22:32.246402 kubelet[1917]: I0515 10:22:32.246238 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrn9z\" (UniqueName: \"kubernetes.io/projected/24a2fd52-eb1f-4294-8453-8bed5ecd6dc1-kube-api-access-xrn9z\") pod \"kube-proxy-4mkq6\" (UID: \"24a2fd52-eb1f-4294-8453-8bed5ecd6dc1\") " pod="kube-system/kube-proxy-4mkq6" May 15 10:22:32.246402 kubelet[1917]: I0515 10:22:32.246253 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-cgroup\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246402 kubelet[1917]: I0515 10:22:32.246281 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-lib-modules\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246402 kubelet[1917]: I0515 10:22:32.246303 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cni-path\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246490 kubelet[1917]: I0515 10:22:32.246388 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-hostproc\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246490 kubelet[1917]: I0515 10:22:32.246431 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-kernel\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246490 kubelet[1917]: I0515 10:22:32.246453 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-xtables-lock\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246490 kubelet[1917]: I0515 10:22:32.246473 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krjg\" (UniqueName: \"kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-kube-api-access-7krjg\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246576 kubelet[1917]: I0515 10:22:32.246505 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24a2fd52-eb1f-4294-8453-8bed5ecd6dc1-xtables-lock\") pod 
\"kube-proxy-4mkq6\" (UID: \"24a2fd52-eb1f-4294-8453-8bed5ecd6dc1\") " pod="kube-system/kube-proxy-4mkq6" May 15 10:22:32.246576 kubelet[1917]: I0515 10:22:32.246522 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-bpf-maps\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.246576 kubelet[1917]: I0515 10:22:32.246537 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-hubble-tls\") pod \"cilium-gjwkx\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") " pod="kube-system/cilium-gjwkx" May 15 10:22:32.355920 kubelet[1917]: I0515 10:22:32.355877 1917 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 15 10:22:32.455107 kubelet[1917]: E0515 10:22:32.454993 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:32.455740 env[1221]: time="2025-05-15T10:22:32.455698889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mkq6,Uid:24a2fd52-eb1f-4294-8453-8bed5ecd6dc1,Namespace:kube-system,Attempt:0,}" May 15 10:22:32.468585 env[1221]: time="2025-05-15T10:22:32.468442325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:22:32.468585 env[1221]: time="2025-05-15T10:22:32.468481635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:22:32.468585 env[1221]: time="2025-05-15T10:22:32.468492163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:22:32.469147 env[1221]: time="2025-05-15T10:22:32.469113634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36bedb1b15be6e593d1362739f5e424cda1a038bb255585a84b851e2b3398fe6 pid=2006 runtime=io.containerd.runc.v2 May 15 10:22:32.487004 systemd[1]: Started cri-containerd-36bedb1b15be6e593d1362739f5e424cda1a038bb255585a84b851e2b3398fe6.scope. May 15 10:22:32.516957 env[1221]: time="2025-05-15T10:22:32.516909645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mkq6,Uid:24a2fd52-eb1f-4294-8453-8bed5ecd6dc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"36bedb1b15be6e593d1362739f5e424cda1a038bb255585a84b851e2b3398fe6\"" May 15 10:22:32.517893 kubelet[1917]: E0515 10:22:32.517871 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:32.520392 env[1221]: time="2025-05-15T10:22:32.520359024Z" level=info msg="CreateContainer within sandbox \"36bedb1b15be6e593d1362739f5e424cda1a038bb255585a84b851e2b3398fe6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:22:32.554486 systemd[1]: Created slice kubepods-besteffort-pode3d52a4b_e5b2_4aec_97fd_0e98a9fa226e.slice. 
May 15 10:22:32.573138 env[1221]: time="2025-05-15T10:22:32.573091462Z" level=info msg="CreateContainer within sandbox \"36bedb1b15be6e593d1362739f5e424cda1a038bb255585a84b851e2b3398fe6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcecf9b26ef7fdebc4a6565c53cce62aa73777cbf517dbe900cab83fd58ba206\"" May 15 10:22:32.575884 env[1221]: time="2025-05-15T10:22:32.575845393Z" level=info msg="StartContainer for \"bcecf9b26ef7fdebc4a6565c53cce62aa73777cbf517dbe900cab83fd58ba206\"" May 15 10:22:32.593577 systemd[1]: Started cri-containerd-bcecf9b26ef7fdebc4a6565c53cce62aa73777cbf517dbe900cab83fd58ba206.scope. May 15 10:22:32.628406 env[1221]: time="2025-05-15T10:22:32.627566704Z" level=info msg="StartContainer for \"bcecf9b26ef7fdebc4a6565c53cce62aa73777cbf517dbe900cab83fd58ba206\" returns successfully" May 15 10:22:32.653452 kubelet[1917]: I0515 10:22:32.653409 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-cilium-config-path\") pod \"cilium-operator-5d85765b45-mf9z8\" (UID: \"e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e\") " pod="kube-system/cilium-operator-5d85765b45-mf9z8" May 15 10:22:32.653452 kubelet[1917]: I0515 10:22:32.653452 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtz2\" (UniqueName: \"kubernetes.io/projected/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-kube-api-access-njtz2\") pod \"cilium-operator-5d85765b45-mf9z8\" (UID: \"e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e\") " pod="kube-system/cilium-operator-5d85765b45-mf9z8" May 15 10:22:32.856598 kubelet[1917]: E0515 10:22:32.856476 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:32.857028 env[1221]: time="2025-05-15T10:22:32.856986176Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mf9z8,Uid:e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e,Namespace:kube-system,Attempt:0,}" May 15 10:22:32.871193 env[1221]: time="2025-05-15T10:22:32.871123070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:22:32.871193 env[1221]: time="2025-05-15T10:22:32.871164782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:22:32.871375 env[1221]: time="2025-05-15T10:22:32.871175270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:22:32.871606 env[1221]: time="2025-05-15T10:22:32.871569849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b pid=2125 runtime=io.containerd.runc.v2 May 15 10:22:32.882353 systemd[1]: Started cri-containerd-37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b.scope. 
May 15 10:22:32.923741 env[1221]: time="2025-05-15T10:22:32.923536266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mf9z8,Uid:e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e,Namespace:kube-system,Attempt:0,} returns sandbox id \"37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b\"" May 15 10:22:32.924155 kubelet[1917]: E0515 10:22:32.924131 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:32.925433 env[1221]: time="2025-05-15T10:22:32.925399041Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 10:22:33.132778 kubelet[1917]: E0515 10:22:33.132631 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:33.348468 kubelet[1917]: E0515 10:22:33.348361 1917 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 15 10:22:33.348468 kubelet[1917]: E0515 10:22:33.348392 1917 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-gjwkx: failed to sync secret cache: timed out waiting for the condition May 15 10:22:33.348468 kubelet[1917]: E0515 10:22:33.348476 1917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-hubble-tls podName:aacf5c52-8891-4638-b518-1068ca37a946 nodeName:}" failed. No retries permitted until 2025-05-15 10:22:33.848450178 +0000 UTC m=+7.832215701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-hubble-tls") pod "cilium-gjwkx" (UID: "aacf5c52-8891-4638-b518-1068ca37a946") : failed to sync secret cache: timed out waiting for the condition May 15 10:22:33.364041 systemd[1]: run-containerd-runc-k8s.io-36bedb1b15be6e593d1362739f5e424cda1a038bb255585a84b851e2b3398fe6-runc.UpfeN1.mount: Deactivated successfully. May 15 10:22:33.430548 kubelet[1917]: E0515 10:22:33.430440 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:33.446524 kubelet[1917]: I0515 10:22:33.446338 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4mkq6" podStartSLOduration=1.446323176 podStartE2EDuration="1.446323176s" podCreationTimestamp="2025-05-15 10:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:22:33.141664291 +0000 UTC m=+7.125429854" watchObservedRunningTime="2025-05-15 10:22:33.446323176 +0000 UTC m=+7.430088739" May 15 10:22:33.625457 kubelet[1917]: E0515 10:22:33.625423 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:33.958996 kubelet[1917]: E0515 10:22:33.958960 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:33.959658 env[1221]: time="2025-05-15T10:22:33.959624922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjwkx,Uid:aacf5c52-8891-4638-b518-1068ca37a946,Namespace:kube-system,Attempt:0,}" May 15 10:22:33.977547 env[1221]: 
time="2025-05-15T10:22:33.977484233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:22:33.977547 env[1221]: time="2025-05-15T10:22:33.977523301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:22:33.977708 env[1221]: time="2025-05-15T10:22:33.977659199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:22:33.977955 env[1221]: time="2025-05-15T10:22:33.977909339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656 pid=2258 runtime=io.containerd.runc.v2 May 15 10:22:33.993019 systemd[1]: Started cri-containerd-36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656.scope. May 15 10:22:34.021797 env[1221]: time="2025-05-15T10:22:34.021747026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjwkx,Uid:aacf5c52-8891-4638-b518-1068ca37a946,Namespace:kube-system,Attempt:0,} returns sandbox id \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\"" May 15 10:22:34.022690 kubelet[1917]: E0515 10:22:34.022339 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:34.135687 kubelet[1917]: E0515 10:22:34.135638 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:34.136745 kubelet[1917]: E0515 10:22:34.135911 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 15 10:22:34.655194 env[1221]: time="2025-05-15T10:22:34.655146699Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:22:34.657342 env[1221]: time="2025-05-15T10:22:34.657300404Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:22:34.658688 env[1221]: time="2025-05-15T10:22:34.658639715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:22:34.659089 env[1221]: time="2025-05-15T10:22:34.659041989Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 10:22:34.660451 env[1221]: time="2025-05-15T10:22:34.660421887Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 10:22:34.662709 env[1221]: time="2025-05-15T10:22:34.662665133Z" level=info msg="CreateContainer within sandbox \"37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 10:22:34.672298 env[1221]: time="2025-05-15T10:22:34.672256657Z" level=info msg="CreateContainer within sandbox \"37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\"" May 15 10:22:34.672698 env[1221]: time="2025-05-15T10:22:34.672608817Z" level=info msg="StartContainer for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\"" May 15 10:22:34.690033 systemd[1]: Started cri-containerd-f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1.scope. May 15 10:22:34.723802 env[1221]: time="2025-05-15T10:22:34.723755286Z" level=info msg="StartContainer for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" returns successfully" May 15 10:22:35.138026 kubelet[1917]: E0515 10:22:35.137906 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:35.148024 kubelet[1917]: I0515 10:22:35.147978 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mf9z8" podStartSLOduration=1.412771432 podStartE2EDuration="3.147961023s" podCreationTimestamp="2025-05-15 10:22:32 +0000 UTC" firstStartedPulling="2025-05-15 10:22:32.925004781 +0000 UTC m=+6.908770344" lastFinishedPulling="2025-05-15 10:22:34.660194372 +0000 UTC m=+8.643959935" observedRunningTime="2025-05-15 10:22:35.147581698 +0000 UTC m=+9.131347261" watchObservedRunningTime="2025-05-15 10:22:35.147961023 +0000 UTC m=+9.131726546" May 15 10:22:35.227317 kubelet[1917]: E0515 10:22:35.227286 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:35.360047 systemd[1]: run-containerd-runc-k8s.io-f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1-runc.EZBD8f.mount: Deactivated successfully. 
May 15 10:22:36.138784 kubelet[1917]: E0515 10:22:36.138748 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:36.139845 kubelet[1917]: E0515 10:22:36.138875 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:39.905148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811572717.mount: Deactivated successfully. May 15 10:22:40.989766 update_engine[1211]: I0515 10:22:40.989718 1211 update_attempter.cc:509] Updating boot flags... May 15 10:22:42.160884 env[1221]: time="2025-05-15T10:22:42.160830742Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:22:42.162750 env[1221]: time="2025-05-15T10:22:42.162709025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:22:42.164751 env[1221]: time="2025-05-15T10:22:42.164718326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:22:42.165285 env[1221]: time="2025-05-15T10:22:42.165249605Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 10:22:42.177748 env[1221]: time="2025-05-15T10:22:42.177678861Z" level=info 
msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:22:42.187160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802742547.mount: Deactivated successfully. May 15 10:22:42.191122 env[1221]: time="2025-05-15T10:22:42.191074390Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\"" May 15 10:22:42.191698 env[1221]: time="2025-05-15T10:22:42.191655291Z" level=info msg="StartContainer for \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\"" May 15 10:22:42.212877 systemd[1]: Started cri-containerd-35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e.scope. May 15 10:22:42.298088 env[1221]: time="2025-05-15T10:22:42.298027414Z" level=info msg="StartContainer for \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\" returns successfully" May 15 10:22:42.328875 systemd[1]: cri-containerd-35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e.scope: Deactivated successfully. 
May 15 10:22:42.382354 env[1221]: time="2025-05-15T10:22:42.382290898Z" level=info msg="shim disconnected" id=35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e May 15 10:22:42.382354 env[1221]: time="2025-05-15T10:22:42.382341280Z" level=warning msg="cleaning up after shim disconnected" id=35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e namespace=k8s.io May 15 10:22:42.382354 env[1221]: time="2025-05-15T10:22:42.382356807Z" level=info msg="cleaning up dead shim" May 15 10:22:42.389242 env[1221]: time="2025-05-15T10:22:42.389181949Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:22:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2397 runtime=io.containerd.runc.v2\n" May 15 10:22:43.151916 kubelet[1917]: E0515 10:22:43.151876 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:43.155405 env[1221]: time="2025-05-15T10:22:43.155333839Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:22:43.172078 env[1221]: time="2025-05-15T10:22:43.172002920Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\"" May 15 10:22:43.172730 env[1221]: time="2025-05-15T10:22:43.172700058Z" level=info msg="StartContainer for \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\"" May 15 10:22:43.185546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e-rootfs.mount: Deactivated successfully. 
May 15 10:22:43.195872 systemd[1]: run-containerd-runc-k8s.io-f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d-runc.derWUO.mount: Deactivated successfully. May 15 10:22:43.197368 systemd[1]: Started cri-containerd-f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d.scope. May 15 10:22:43.229715 env[1221]: time="2025-05-15T10:22:43.228187643Z" level=info msg="StartContainer for \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\" returns successfully" May 15 10:22:43.244496 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:22:43.244723 systemd[1]: Stopped systemd-sysctl.service. May 15 10:22:43.244902 systemd[1]: Stopping systemd-sysctl.service... May 15 10:22:43.246414 systemd[1]: Starting systemd-sysctl.service... May 15 10:22:43.248441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 10:22:43.251403 systemd[1]: cri-containerd-f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d.scope: Deactivated successfully. May 15 10:22:43.256347 systemd[1]: Finished systemd-sysctl.service. 
May 15 10:22:43.276203 env[1221]: time="2025-05-15T10:22:43.276156256Z" level=info msg="shim disconnected" id=f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d May 15 10:22:43.276203 env[1221]: time="2025-05-15T10:22:43.276202556Z" level=warning msg="cleaning up after shim disconnected" id=f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d namespace=k8s.io May 15 10:22:43.276451 env[1221]: time="2025-05-15T10:22:43.276212360Z" level=info msg="cleaning up dead shim" May 15 10:22:43.283532 env[1221]: time="2025-05-15T10:22:43.283490029Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:22:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2462 runtime=io.containerd.runc.v2\n" May 15 10:22:44.153699 kubelet[1917]: E0515 10:22:44.153523 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:44.159110 env[1221]: time="2025-05-15T10:22:44.159061800Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:22:44.176113 env[1221]: time="2025-05-15T10:22:44.176051197Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\"" May 15 10:22:44.177392 env[1221]: time="2025-05-15T10:22:44.176708224Z" level=info msg="StartContainer for \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\"" May 15 10:22:44.185281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d-rootfs.mount: Deactivated successfully. 
May 15 10:22:44.195197 systemd[1]: run-containerd-runc-k8s.io-105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced-runc.HlqN4q.mount: Deactivated successfully. May 15 10:22:44.196591 systemd[1]: Started cri-containerd-105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced.scope. May 15 10:22:44.248059 env[1221]: time="2025-05-15T10:22:44.247971118Z" level=info msg="StartContainer for \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\" returns successfully" May 15 10:22:44.251544 systemd[1]: cri-containerd-105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced.scope: Deactivated successfully. May 15 10:22:44.271066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced-rootfs.mount: Deactivated successfully. May 15 10:22:44.277761 env[1221]: time="2025-05-15T10:22:44.277719310Z" level=info msg="shim disconnected" id=105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced May 15 10:22:44.277761 env[1221]: time="2025-05-15T10:22:44.277761727Z" level=warning msg="cleaning up after shim disconnected" id=105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced namespace=k8s.io May 15 10:22:44.277992 env[1221]: time="2025-05-15T10:22:44.277770331Z" level=info msg="cleaning up dead shim" May 15 10:22:44.284774 env[1221]: time="2025-05-15T10:22:44.284735486Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:22:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2521 runtime=io.containerd.runc.v2\n" May 15 10:22:45.157081 kubelet[1917]: E0515 10:22:45.157047 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:45.159085 env[1221]: time="2025-05-15T10:22:45.159025433Z" level=info msg="CreateContainer within sandbox 
\"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:22:45.195488 env[1221]: time="2025-05-15T10:22:45.195441334Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\"" May 15 10:22:45.196473 env[1221]: time="2025-05-15T10:22:45.196442763Z" level=info msg="StartContainer for \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\"" May 15 10:22:45.214151 systemd[1]: Started cri-containerd-aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8.scope. May 15 10:22:45.237540 systemd[1]: cri-containerd-aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8.scope: Deactivated successfully. May 15 10:22:45.239648 env[1221]: time="2025-05-15T10:22:45.239597280Z" level=info msg="StartContainer for \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\" returns successfully" May 15 10:22:45.252269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8-rootfs.mount: Deactivated successfully. 
May 15 10:22:45.256516 env[1221]: time="2025-05-15T10:22:45.256465470Z" level=info msg="shim disconnected" id=aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8 May 15 10:22:45.256516 env[1221]: time="2025-05-15T10:22:45.256513209Z" level=warning msg="cleaning up after shim disconnected" id=aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8 namespace=k8s.io May 15 10:22:45.256655 env[1221]: time="2025-05-15T10:22:45.256524733Z" level=info msg="cleaning up dead shim" May 15 10:22:45.262738 env[1221]: time="2025-05-15T10:22:45.262665998Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:22:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2576 runtime=io.containerd.runc.v2\n" May 15 10:22:46.161258 kubelet[1917]: E0515 10:22:46.160323 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:46.162996 env[1221]: time="2025-05-15T10:22:46.162960686Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:22:46.176506 env[1221]: time="2025-05-15T10:22:46.174965175Z" level=info msg="CreateContainer within sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\"" May 15 10:22:46.176506 env[1221]: time="2025-05-15T10:22:46.175747865Z" level=info msg="StartContainer for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\"" May 15 10:22:46.195919 systemd[1]: Started cri-containerd-82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a.scope. 
May 15 10:22:46.238207 env[1221]: time="2025-05-15T10:22:46.236041134Z" level=info msg="StartContainer for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" returns successfully" May 15 10:22:46.253167 systemd[1]: run-containerd-runc-k8s.io-82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a-runc.SYDtUw.mount: Deactivated successfully. May 15 10:22:46.402203 kubelet[1917]: I0515 10:22:46.402176 1917 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 10:22:46.431438 systemd[1]: Created slice kubepods-burstable-pod5fc18892_1118_4ff8_ad37_add1a13614c5.slice. May 15 10:22:46.435927 systemd[1]: Created slice kubepods-burstable-pode7bcf14b_3837_4afa_94a3_04e4ef52e73d.slice. May 15 10:22:46.510705 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 15 10:22:46.550473 kubelet[1917]: I0515 10:22:46.550438 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlwn\" (UniqueName: \"kubernetes.io/projected/5fc18892-1118-4ff8-ad37-add1a13614c5-kube-api-access-gmlwn\") pod \"coredns-6f6b679f8f-29rdm\" (UID: \"5fc18892-1118-4ff8-ad37-add1a13614c5\") " pod="kube-system/coredns-6f6b679f8f-29rdm" May 15 10:22:46.550473 kubelet[1917]: I0515 10:22:46.550476 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdt9g\" (UniqueName: \"kubernetes.io/projected/e7bcf14b-3837-4afa-94a3-04e4ef52e73d-kube-api-access-kdt9g\") pod \"coredns-6f6b679f8f-vthck\" (UID: \"e7bcf14b-3837-4afa-94a3-04e4ef52e73d\") " pod="kube-system/coredns-6f6b679f8f-vthck" May 15 10:22:46.550662 kubelet[1917]: I0515 10:22:46.550494 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7bcf14b-3837-4afa-94a3-04e4ef52e73d-config-volume\") pod \"coredns-6f6b679f8f-vthck\" (UID: 
\"e7bcf14b-3837-4afa-94a3-04e4ef52e73d\") " pod="kube-system/coredns-6f6b679f8f-vthck" May 15 10:22:46.550662 kubelet[1917]: I0515 10:22:46.550519 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc18892-1118-4ff8-ad37-add1a13614c5-config-volume\") pod \"coredns-6f6b679f8f-29rdm\" (UID: \"5fc18892-1118-4ff8-ad37-add1a13614c5\") " pod="kube-system/coredns-6f6b679f8f-29rdm" May 15 10:22:46.734964 kubelet[1917]: E0515 10:22:46.734920 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:46.735783 env[1221]: time="2025-05-15T10:22:46.735736511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-29rdm,Uid:5fc18892-1118-4ff8-ad37-add1a13614c5,Namespace:kube-system,Attempt:0,}" May 15 10:22:46.739031 kubelet[1917]: E0515 10:22:46.738997 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:46.739505 env[1221]: time="2025-05-15T10:22:46.739460571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vthck,Uid:e7bcf14b-3837-4afa-94a3-04e4ef52e73d,Namespace:kube-system,Attempt:0,}" May 15 10:22:46.797633 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 15 10:22:47.164072 kubelet[1917]: E0515 10:22:47.163966 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:47.178655 kubelet[1917]: I0515 10:22:47.178600 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gjwkx" podStartSLOduration=7.035554569 podStartE2EDuration="15.178585153s" podCreationTimestamp="2025-05-15 10:22:32 +0000 UTC" firstStartedPulling="2025-05-15 10:22:34.023226953 +0000 UTC m=+8.006992516" lastFinishedPulling="2025-05-15 10:22:42.166257537 +0000 UTC m=+16.150023100" observedRunningTime="2025-05-15 10:22:47.17801175 +0000 UTC m=+21.161777313" watchObservedRunningTime="2025-05-15 10:22:47.178585153 +0000 UTC m=+21.162350716" May 15 10:22:48.166129 kubelet[1917]: E0515 10:22:48.166059 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:48.407874 systemd-networkd[1044]: cilium_host: Link UP May 15 10:22:48.407977 systemd-networkd[1044]: cilium_net: Link UP May 15 10:22:48.410163 systemd-networkd[1044]: cilium_net: Gained carrier May 15 10:22:48.410967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 10:22:48.411040 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 10:22:48.411286 systemd-networkd[1044]: cilium_host: Gained carrier May 15 10:22:48.411435 systemd-networkd[1044]: cilium_net: Gained IPv6LL May 15 10:22:48.411564 systemd-networkd[1044]: cilium_host: Gained IPv6LL May 15 10:22:48.491869 systemd-networkd[1044]: cilium_vxlan: Link UP May 15 10:22:48.491877 systemd-networkd[1044]: cilium_vxlan: Gained carrier May 15 10:22:48.834696 kernel: NET: Registered PF_ALG protocol family May 15 10:22:49.168276 kubelet[1917]: E0515 10:22:49.168225 1917 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:49.409432 systemd-networkd[1044]: lxc_health: Link UP May 15 10:22:49.416460 systemd-networkd[1044]: lxc_health: Gained carrier May 15 10:22:49.416727 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:22:49.831995 systemd-networkd[1044]: lxcae1bc4bf2434: Link UP May 15 10:22:49.846692 kernel: eth0: renamed from tmp462dd May 15 10:22:49.855231 systemd-networkd[1044]: lxc677a49a56eb7: Link UP May 15 10:22:49.868698 kernel: eth0: renamed from tmpca2e3 May 15 10:22:49.877278 systemd-networkd[1044]: lxcae1bc4bf2434: Gained carrier May 15 10:22:49.877681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcae1bc4bf2434: link becomes ready May 15 10:22:49.879212 systemd-networkd[1044]: lxc677a49a56eb7: Gained carrier May 15 10:22:49.879684 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc677a49a56eb7: link becomes ready May 15 10:22:50.170581 kubelet[1917]: E0515 10:22:50.170172 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:50.184077 systemd-networkd[1044]: cilium_vxlan: Gained IPv6LL May 15 10:22:51.171422 kubelet[1917]: E0515 10:22:51.171384 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:51.336144 systemd-networkd[1044]: lxc677a49a56eb7: Gained IPv6LL May 15 10:22:51.399828 systemd-networkd[1044]: lxc_health: Gained IPv6LL May 15 10:22:51.848057 systemd-networkd[1044]: lxcae1bc4bf2434: Gained IPv6LL May 15 10:22:52.711482 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:51588.service. 
May 15 10:22:52.754096 sshd[3129]: Accepted publickey for core from 10.0.0.1 port 51588 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:22:52.756369 sshd[3129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:22:52.760639 systemd-logind[1207]: New session 6 of user core. May 15 10:22:52.761249 systemd[1]: Started session-6.scope. May 15 10:22:52.881865 sshd[3129]: pam_unix(sshd:session): session closed for user core May 15 10:22:52.885624 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:51588.service: Deactivated successfully. May 15 10:22:52.886356 systemd[1]: session-6.scope: Deactivated successfully. May 15 10:22:52.887375 systemd-logind[1207]: Session 6 logged out. Waiting for processes to exit. May 15 10:22:52.888187 systemd-logind[1207]: Removed session 6. May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.430501565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.430533774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.430543216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.430709742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca2e3bc652d3ef94184493fa05319986569070f4044e5d71af5268e442341373 pid=3165 runtime=io.containerd.runc.v2 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.428767129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.428805019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.428815502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:22:53.431168 env[1221]: time="2025-05-15T10:22:53.428990710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/462dd4dec8fef1d4f3022cb3c2f0c08360e1bbe5b6f8f828414ebdb9c549a67e pid=3164 runtime=io.containerd.runc.v2 May 15 10:22:53.450293 systemd[1]: Started cri-containerd-462dd4dec8fef1d4f3022cb3c2f0c08360e1bbe5b6f8f828414ebdb9c549a67e.scope. May 15 10:22:53.462599 systemd[1]: Started cri-containerd-ca2e3bc652d3ef94184493fa05319986569070f4044e5d71af5268e442341373.scope. 
May 15 10:22:53.517060 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:22:53.521791 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:22:53.539087 env[1221]: time="2025-05-15T10:22:53.539044670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-29rdm,Uid:5fc18892-1118-4ff8-ad37-add1a13614c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca2e3bc652d3ef94184493fa05319986569070f4044e5d71af5268e442341373\"" May 15 10:22:53.539233 env[1221]: time="2025-05-15T10:22:53.539065195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vthck,Uid:e7bcf14b-3837-4afa-94a3-04e4ef52e73d,Namespace:kube-system,Attempt:0,} returns sandbox id \"462dd4dec8fef1d4f3022cb3c2f0c08360e1bbe5b6f8f828414ebdb9c549a67e\"" May 15 10:22:53.540729 kubelet[1917]: E0515 10:22:53.539828 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:53.540729 kubelet[1917]: E0515 10:22:53.540016 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:22:53.542859 env[1221]: time="2025-05-15T10:22:53.542807382Z" level=info msg="CreateContainer within sandbox \"ca2e3bc652d3ef94184493fa05319986569070f4044e5d71af5268e442341373\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:22:53.543595 env[1221]: time="2025-05-15T10:22:53.543561069Z" level=info msg="CreateContainer within sandbox \"462dd4dec8fef1d4f3022cb3c2f0c08360e1bbe5b6f8f828414ebdb9c549a67e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:22:53.561425 env[1221]: time="2025-05-15T10:22:53.561376638Z" level=info msg="CreateContainer 
within sandbox \"ca2e3bc652d3ef94184493fa05319986569070f4044e5d71af5268e442341373\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a43df2ecb27c3b144e5b2f46f1fa4dec009875ad3a3e5a9cdb4af8e90f1c31d2\"" May 15 10:22:53.562252 env[1221]: time="2025-05-15T10:22:53.562218029Z" level=info msg="StartContainer for \"a43df2ecb27c3b144e5b2f46f1fa4dec009875ad3a3e5a9cdb4af8e90f1c31d2\"" May 15 10:22:53.563151 env[1221]: time="2025-05-15T10:22:53.563097670Z" level=info msg="CreateContainer within sandbox \"462dd4dec8fef1d4f3022cb3c2f0c08360e1bbe5b6f8f828414ebdb9c549a67e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e31dca55b237a9301e9ddccd32b1e6e0bebb037418aed675dadc2dea8226bf0\"" May 15 10:22:53.563448 env[1221]: time="2025-05-15T10:22:53.563424000Z" level=info msg="StartContainer for \"6e31dca55b237a9301e9ddccd32b1e6e0bebb037418aed675dadc2dea8226bf0\"" May 15 10:22:53.579026 systemd[1]: Started cri-containerd-a43df2ecb27c3b144e5b2f46f1fa4dec009875ad3a3e5a9cdb4af8e90f1c31d2.scope. May 15 10:22:53.583466 systemd[1]: Started cri-containerd-6e31dca55b237a9301e9ddccd32b1e6e0bebb037418aed675dadc2dea8226bf0.scope. 
May 15 10:22:53.621406 env[1221]: time="2025-05-15T10:22:53.621342333Z" level=info msg="StartContainer for \"a43df2ecb27c3b144e5b2f46f1fa4dec009875ad3a3e5a9cdb4af8e90f1c31d2\" returns successfully"
May 15 10:22:53.625108 env[1221]: time="2025-05-15T10:22:53.625024423Z" level=info msg="StartContainer for \"6e31dca55b237a9301e9ddccd32b1e6e0bebb037418aed675dadc2dea8226bf0\" returns successfully"
May 15 10:22:54.177493 kubelet[1917]: E0515 10:22:54.177465 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:54.178214 kubelet[1917]: E0515 10:22:54.178182 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:54.214494 kubelet[1917]: I0515 10:22:54.214442 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vthck" podStartSLOduration=22.214424781 podStartE2EDuration="22.214424781s" podCreationTimestamp="2025-05-15 10:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:22:54.197828041 +0000 UTC m=+28.181593604" watchObservedRunningTime="2025-05-15 10:22:54.214424781 +0000 UTC m=+28.198190344"
May 15 10:22:54.230684 kubelet[1917]: I0515 10:22:54.230611 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-29rdm" podStartSLOduration=22.230595928 podStartE2EDuration="22.230595928s" podCreationTimestamp="2025-05-15 10:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:22:54.219738783 +0000 UTC m=+28.203504386" watchObservedRunningTime="2025-05-15 10:22:54.230595928 +0000 UTC m=+28.214361491"
May 15 10:22:54.435678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539930218.mount: Deactivated successfully.
May 15 10:22:55.180988 kubelet[1917]: E0515 10:22:55.180047 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:55.180988 kubelet[1917]: E0515 10:22:55.180182 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:56.181533 kubelet[1917]: E0515 10:22:56.181408 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:56.181533 kubelet[1917]: E0515 10:22:56.181440 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:22:57.886001 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:51598.service.
May 15 10:22:57.925858 sshd[3318]: Accepted publickey for core from 10.0.0.1 port 51598 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:22:57.927434 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:22:57.931128 systemd-logind[1207]: New session 7 of user core.
May 15 10:22:57.931574 systemd[1]: Started session-7.scope.
May 15 10:22:58.058476 sshd[3318]: pam_unix(sshd:session): session closed for user core
May 15 10:22:58.061247 systemd[1]: session-7.scope: Deactivated successfully.
May 15 10:22:58.061908 systemd-logind[1207]: Session 7 logged out. Waiting for processes to exit.
May 15 10:22:58.062033 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:51598.service: Deactivated successfully.
May 15 10:22:58.063228 systemd-logind[1207]: Removed session 7.
May 15 10:23:03.061921 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:60010.service.
May 15 10:23:03.103593 sshd[3336]: Accepted publickey for core from 10.0.0.1 port 60010 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:03.104930 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:03.109146 systemd-logind[1207]: New session 8 of user core.
May 15 10:23:03.109583 systemd[1]: Started session-8.scope.
May 15 10:23:03.220203 sshd[3336]: pam_unix(sshd:session): session closed for user core
May 15 10:23:03.222732 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:60010.service: Deactivated successfully.
May 15 10:23:03.223458 systemd[1]: session-8.scope: Deactivated successfully.
May 15 10:23:03.223980 systemd-logind[1207]: Session 8 logged out. Waiting for processes to exit.
May 15 10:23:03.224639 systemd-logind[1207]: Removed session 8.
May 15 10:23:08.226595 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:60014.service.
May 15 10:23:08.267575 sshd[3350]: Accepted publickey for core from 10.0.0.1 port 60014 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:08.268871 sshd[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:08.274805 systemd-logind[1207]: New session 9 of user core.
May 15 10:23:08.275724 systemd[1]: Started session-9.scope.
May 15 10:23:08.398964 sshd[3350]: pam_unix(sshd:session): session closed for user core
May 15 10:23:08.408638 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:60030.service.
May 15 10:23:08.411430 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:60014.service: Deactivated successfully.
May 15 10:23:08.412168 systemd[1]: session-9.scope: Deactivated successfully.
May 15 10:23:08.414565 systemd-logind[1207]: Session 9 logged out. Waiting for processes to exit.
May 15 10:23:08.415665 systemd-logind[1207]: Removed session 9.
May 15 10:23:08.447756 sshd[3363]: Accepted publickey for core from 10.0.0.1 port 60030 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:08.449020 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:08.453521 systemd[1]: Started session-10.scope.
May 15 10:23:08.455148 systemd-logind[1207]: New session 10 of user core.
May 15 10:23:08.653769 sshd[3363]: pam_unix(sshd:session): session closed for user core
May 15 10:23:08.654938 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:60034.service.
May 15 10:23:08.660907 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:60030.service: Deactivated successfully.
May 15 10:23:08.661736 systemd[1]: session-10.scope: Deactivated successfully.
May 15 10:23:08.662428 systemd-logind[1207]: Session 10 logged out. Waiting for processes to exit.
May 15 10:23:08.666208 systemd-logind[1207]: Removed session 10.
May 15 10:23:08.703275 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 60034 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:08.705028 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:08.708971 systemd-logind[1207]: New session 11 of user core.
May 15 10:23:08.709900 systemd[1]: Started session-11.scope.
May 15 10:23:08.821404 sshd[3374]: pam_unix(sshd:session): session closed for user core
May 15 10:23:08.823895 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:60034.service: Deactivated successfully.
May 15 10:23:08.824630 systemd[1]: session-11.scope: Deactivated successfully.
May 15 10:23:08.825356 systemd-logind[1207]: Session 11 logged out. Waiting for processes to exit.
May 15 10:23:08.826117 systemd-logind[1207]: Removed session 11.
May 15 10:23:13.827216 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:49236.service.
May 15 10:23:13.866792 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 49236 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:13.867948 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:13.871059 systemd-logind[1207]: New session 12 of user core.
May 15 10:23:13.872030 systemd[1]: Started session-12.scope.
May 15 10:23:13.979387 sshd[3390]: pam_unix(sshd:session): session closed for user core
May 15 10:23:13.981832 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:49236.service: Deactivated successfully.
May 15 10:23:13.982575 systemd[1]: session-12.scope: Deactivated successfully.
May 15 10:23:13.983089 systemd-logind[1207]: Session 12 logged out. Waiting for processes to exit.
May 15 10:23:13.983759 systemd-logind[1207]: Removed session 12.
May 15 10:23:18.984403 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:49242.service.
May 15 10:23:19.021522 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 49242 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:19.023503 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:19.026732 systemd-logind[1207]: New session 13 of user core.
May 15 10:23:19.027520 systemd[1]: Started session-13.scope.
May 15 10:23:19.131824 sshd[3406]: pam_unix(sshd:session): session closed for user core
May 15 10:23:19.134582 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:49242.service: Deactivated successfully.
May 15 10:23:19.135189 systemd[1]: session-13.scope: Deactivated successfully.
May 15 10:23:19.135747 systemd-logind[1207]: Session 13 logged out. Waiting for processes to exit.
May 15 10:23:19.136959 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:49244.service.
May 15 10:23:19.137637 systemd-logind[1207]: Removed session 13.
May 15 10:23:19.174709 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 49244 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:19.176212 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:19.179331 systemd-logind[1207]: New session 14 of user core.
May 15 10:23:19.180184 systemd[1]: Started session-14.scope.
May 15 10:23:19.364516 sshd[3419]: pam_unix(sshd:session): session closed for user core
May 15 10:23:19.369382 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:49256.service.
May 15 10:23:19.370281 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:49244.service: Deactivated successfully.
May 15 10:23:19.371231 systemd[1]: session-14.scope: Deactivated successfully.
May 15 10:23:19.371954 systemd-logind[1207]: Session 14 logged out. Waiting for processes to exit.
May 15 10:23:19.372628 systemd-logind[1207]: Removed session 14.
May 15 10:23:19.408593 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 49256 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:19.410047 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:19.413486 systemd-logind[1207]: New session 15 of user core.
May 15 10:23:19.414903 systemd[1]: Started session-15.scope.
May 15 10:23:20.679076 sshd[3429]: pam_unix(sshd:session): session closed for user core
May 15 10:23:20.684526 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:49262.service.
May 15 10:23:20.685078 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:49256.service: Deactivated successfully.
May 15 10:23:20.685882 systemd[1]: session-15.scope: Deactivated successfully.
May 15 10:23:20.687347 systemd-logind[1207]: Session 15 logged out. Waiting for processes to exit.
May 15 10:23:20.699890 systemd-logind[1207]: Removed session 15.
May 15 10:23:20.724326 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 49262 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:20.725599 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:20.728694 systemd-logind[1207]: New session 16 of user core.
May 15 10:23:20.729561 systemd[1]: Started session-16.scope.
May 15 10:23:20.948154 sshd[3450]: pam_unix(sshd:session): session closed for user core
May 15 10:23:20.951737 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:49278.service.
May 15 10:23:20.952263 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:49262.service: Deactivated successfully.
May 15 10:23:20.953025 systemd[1]: session-16.scope: Deactivated successfully.
May 15 10:23:20.954055 systemd-logind[1207]: Session 16 logged out. Waiting for processes to exit.
May 15 10:23:20.955039 systemd-logind[1207]: Removed session 16.
May 15 10:23:20.991622 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 49278 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:20.992794 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:20.996356 systemd-logind[1207]: New session 17 of user core.
May 15 10:23:20.996865 systemd[1]: Started session-17.scope.
May 15 10:23:21.111868 sshd[3463]: pam_unix(sshd:session): session closed for user core
May 15 10:23:21.114907 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:49278.service: Deactivated successfully.
May 15 10:23:21.115625 systemd[1]: session-17.scope: Deactivated successfully.
May 15 10:23:21.116142 systemd-logind[1207]: Session 17 logged out. Waiting for processes to exit.
May 15 10:23:21.116801 systemd-logind[1207]: Removed session 17.
May 15 10:23:26.117901 systemd[1]: Started sshd@17-10.0.0.110:22-10.0.0.1:40976.service.
May 15 10:23:26.156175 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 40976 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:26.157962 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:26.161722 systemd-logind[1207]: New session 18 of user core.
May 15 10:23:26.162701 systemd[1]: Started session-18.scope.
May 15 10:23:26.269559 sshd[3482]: pam_unix(sshd:session): session closed for user core
May 15 10:23:26.271985 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:40976.service: Deactivated successfully.
May 15 10:23:26.272751 systemd[1]: session-18.scope: Deactivated successfully.
May 15 10:23:26.273276 systemd-logind[1207]: Session 18 logged out. Waiting for processes to exit.
May 15 10:23:26.273918 systemd-logind[1207]: Removed session 18.
May 15 10:23:31.274324 systemd[1]: Started sshd@18-10.0.0.110:22-10.0.0.1:40978.service.
May 15 10:23:31.311402 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 40978 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:31.313072 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:31.316216 systemd-logind[1207]: New session 19 of user core.
May 15 10:23:31.317121 systemd[1]: Started session-19.scope.
May 15 10:23:31.420806 sshd[3497]: pam_unix(sshd:session): session closed for user core
May 15 10:23:31.423148 systemd[1]: sshd@18-10.0.0.110:22-10.0.0.1:40978.service: Deactivated successfully.
May 15 10:23:31.423885 systemd[1]: session-19.scope: Deactivated successfully.
May 15 10:23:31.424361 systemd-logind[1207]: Session 19 logged out. Waiting for processes to exit.
May 15 10:23:31.425093 systemd-logind[1207]: Removed session 19.
May 15 10:23:36.425178 systemd[1]: Started sshd@19-10.0.0.110:22-10.0.0.1:56770.service.
May 15 10:23:36.463168 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 56770 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:36.464766 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:36.468112 systemd-logind[1207]: New session 20 of user core.
May 15 10:23:36.469130 systemd[1]: Started session-20.scope.
May 15 10:23:36.573523 sshd[3512]: pam_unix(sshd:session): session closed for user core
May 15 10:23:36.575810 systemd[1]: sshd@19-10.0.0.110:22-10.0.0.1:56770.service: Deactivated successfully.
May 15 10:23:36.576537 systemd[1]: session-20.scope: Deactivated successfully.
May 15 10:23:36.577055 systemd-logind[1207]: Session 20 logged out. Waiting for processes to exit.
May 15 10:23:36.577834 systemd-logind[1207]: Removed session 20.
May 15 10:23:41.577906 systemd[1]: Started sshd@20-10.0.0.110:22-10.0.0.1:56778.service.
May 15 10:23:41.615706 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 56778 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:41.616993 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:41.620226 systemd-logind[1207]: New session 21 of user core.
May 15 10:23:41.621097 systemd[1]: Started session-21.scope.
May 15 10:23:41.726082 sshd[3525]: pam_unix(sshd:session): session closed for user core
May 15 10:23:41.729994 systemd[1]: Started sshd@21-10.0.0.110:22-10.0.0.1:56786.service.
May 15 10:23:41.730468 systemd[1]: sshd@20-10.0.0.110:22-10.0.0.1:56778.service: Deactivated successfully.
May 15 10:23:41.731134 systemd[1]: session-21.scope: Deactivated successfully.
May 15 10:23:41.731631 systemd-logind[1207]: Session 21 logged out. Waiting for processes to exit.
May 15 10:23:41.732594 systemd-logind[1207]: Removed session 21.
May 15 10:23:41.768094 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 56786 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:41.769243 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:41.772308 systemd-logind[1207]: New session 22 of user core.
May 15 10:23:41.773247 systemd[1]: Started session-22.scope.
May 15 10:23:42.108908 kubelet[1917]: E0515 10:23:42.108867 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:23:43.109293 kubelet[1917]: E0515 10:23:43.109261 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:23:44.224234 env[1221]: time="2025-05-15T10:23:44.224186049Z" level=info msg="StopContainer for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" with timeout 30 (s)"
May 15 10:23:44.225816 env[1221]: time="2025-05-15T10:23:44.225780689Z" level=info msg="Stop container \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" with signal terminated"
May 15 10:23:44.235983 systemd[1]: cri-containerd-f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1.scope: Deactivated successfully.
May 15 10:23:44.260389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1-rootfs.mount: Deactivated successfully.
May 15 10:23:44.267233 env[1221]: time="2025-05-15T10:23:44.267174679Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 10:23:44.269321 env[1221]: time="2025-05-15T10:23:44.269287252Z" level=info msg="shim disconnected" id=f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1
May 15 10:23:44.269394 env[1221]: time="2025-05-15T10:23:44.269320930Z" level=warning msg="cleaning up after shim disconnected" id=f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1 namespace=k8s.io
May 15 10:23:44.269394 env[1221]: time="2025-05-15T10:23:44.269330570Z" level=info msg="cleaning up dead shim"
May 15 10:23:44.272152 env[1221]: time="2025-05-15T10:23:44.272119389Z" level=info msg="StopContainer for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" with timeout 2 (s)"
May 15 10:23:44.272380 env[1221]: time="2025-05-15T10:23:44.272354577Z" level=info msg="Stop container \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" with signal terminated"
May 15 10:23:44.276442 env[1221]: time="2025-05-15T10:23:44.276406813Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3585 runtime=io.containerd.runc.v2\n"
May 15 10:23:44.278064 systemd-networkd[1044]: lxc_health: Link DOWN
May 15 10:23:44.278071 systemd-networkd[1044]: lxc_health: Lost carrier
May 15 10:23:44.279221 env[1221]: time="2025-05-15T10:23:44.279176193Z" level=info msg="StopContainer for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" returns successfully"
May 15 10:23:44.279773 env[1221]: time="2025-05-15T10:23:44.279743044Z" level=info msg="StopPodSandbox for \"37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b\""
May 15 10:23:44.279838 env[1221]: time="2025-05-15T10:23:44.279807321Z" level=info msg="Container to stop \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:44.281562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b-shm.mount: Deactivated successfully.
May 15 10:23:44.287533 systemd[1]: cri-containerd-37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b.scope: Deactivated successfully.
May 15 10:23:44.306320 systemd[1]: cri-containerd-82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a.scope: Deactivated successfully.
May 15 10:23:44.306666 systemd[1]: cri-containerd-82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a.scope: Consumed 6.416s CPU time.
May 15 10:23:44.315041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b-rootfs.mount: Deactivated successfully.
May 15 10:23:44.321135 env[1221]: time="2025-05-15T10:23:44.321089077Z" level=info msg="shim disconnected" id=37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b
May 15 10:23:44.321135 env[1221]: time="2025-05-15T10:23:44.321134434Z" level=warning msg="cleaning up after shim disconnected" id=37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b namespace=k8s.io
May 15 10:23:44.321308 env[1221]: time="2025-05-15T10:23:44.321144754Z" level=info msg="cleaning up dead shim"
May 15 10:23:44.330695 env[1221]: time="2025-05-15T10:23:44.329060794Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3633 runtime=io.containerd.runc.v2\n"
May 15 10:23:44.330695 env[1221]: time="2025-05-15T10:23:44.329372218Z" level=info msg="TearDown network for sandbox \"37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b\" successfully"
May 15 10:23:44.330695 env[1221]: time="2025-05-15T10:23:44.329394737Z" level=info msg="StopPodSandbox for \"37003611223565f543e74184ff3a4bec8d7468ccf73038fc44c42f65725b211b\" returns successfully"
May 15 10:23:44.330821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a-rootfs.mount: Deactivated successfully.
May 15 10:23:44.348658 env[1221]: time="2025-05-15T10:23:44.348603527Z" level=info msg="shim disconnected" id=82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a
May 15 10:23:44.348921 env[1221]: time="2025-05-15T10:23:44.348660965Z" level=warning msg="cleaning up after shim disconnected" id=82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a namespace=k8s.io
May 15 10:23:44.348921 env[1221]: time="2025-05-15T10:23:44.348710762Z" level=info msg="cleaning up dead shim"
May 15 10:23:44.355365 env[1221]: time="2025-05-15T10:23:44.355308589Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3651 runtime=io.containerd.runc.v2\n"
May 15 10:23:44.357144 env[1221]: time="2025-05-15T10:23:44.357104578Z" level=info msg="StopContainer for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" returns successfully"
May 15 10:23:44.357523 env[1221]: time="2025-05-15T10:23:44.357498678Z" level=info msg="StopPodSandbox for \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\""
May 15 10:23:44.357575 env[1221]: time="2025-05-15T10:23:44.357556315Z" level=info msg="Container to stop \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:44.357619 env[1221]: time="2025-05-15T10:23:44.357570675Z" level=info msg="Container to stop \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:44.357619 env[1221]: time="2025-05-15T10:23:44.357589114Z" level=info msg="Container to stop \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:44.357619 env[1221]: time="2025-05-15T10:23:44.357601953Z" level=info msg="Container to stop \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:44.357619 env[1221]: time="2025-05-15T10:23:44.357612033Z" level=info msg="Container to stop \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:44.362661 systemd[1]: cri-containerd-36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656.scope: Deactivated successfully.
May 15 10:23:44.390902 kubelet[1917]: I0515 10:23:44.390856 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-cilium-config-path\") pod \"e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e\" (UID: \"e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e\") "
May 15 10:23:44.390902 kubelet[1917]: I0515 10:23:44.390908 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njtz2\" (UniqueName: \"kubernetes.io/projected/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-kube-api-access-njtz2\") pod \"e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e\" (UID: \"e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e\") "
May 15 10:23:44.394073 kubelet[1917]: I0515 10:23:44.394034 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e" (UID: "e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 10:23:44.395036 env[1221]: time="2025-05-15T10:23:44.394954267Z" level=info msg="shim disconnected" id=36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656
May 15 10:23:44.395036 env[1221]: time="2025-05-15T10:23:44.395010984Z" level=warning msg="cleaning up after shim disconnected" id=36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656 namespace=k8s.io
May 15 10:23:44.395036 env[1221]: time="2025-05-15T10:23:44.395021944Z" level=info msg="cleaning up dead shim"
May 15 10:23:44.399838 kubelet[1917]: I0515 10:23:44.399789 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-kube-api-access-njtz2" (OuterVolumeSpecName: "kube-api-access-njtz2") pod "e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e" (UID: "e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e"). InnerVolumeSpecName "kube-api-access-njtz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:23:44.403813 env[1221]: time="2025-05-15T10:23:44.403755063Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3682 runtime=io.containerd.runc.v2\n"
May 15 10:23:44.404106 env[1221]: time="2025-05-15T10:23:44.404064727Z" level=info msg="TearDown network for sandbox \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" successfully"
May 15 10:23:44.404106 env[1221]: time="2025-05-15T10:23:44.404096806Z" level=info msg="StopPodSandbox for \"36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656\" returns successfully"
May 15 10:23:44.493483 kubelet[1917]: I0515 10:23:44.491159 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-hostproc\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493483 kubelet[1917]: I0515 10:23:44.491207 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-run\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493483 kubelet[1917]: I0515 10:23:44.491226 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-cgroup\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493483 kubelet[1917]: I0515 10:23:44.491253 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-lib-modules\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493483 kubelet[1917]: I0515 10:23:44.491271 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-etc-cni-netd\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493483 kubelet[1917]: I0515 10:23:44.491288 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-kernel\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493816 kubelet[1917]: I0515 10:23:44.491312 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-hubble-tls\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493816 kubelet[1917]: I0515 10:23:44.491335 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-bpf-maps\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493816 kubelet[1917]: I0515 10:23:44.491351 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-xtables-lock\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493816 kubelet[1917]: I0515 10:23:44.491369 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aacf5c52-8891-4638-b518-1068ca37a946-clustermesh-secrets\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493816 kubelet[1917]: I0515 10:23:44.491386 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7krjg\" (UniqueName: \"kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-kube-api-access-7krjg\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493816 kubelet[1917]: I0515 10:23:44.491411 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aacf5c52-8891-4638-b518-1068ca37a946-cilium-config-path\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493953 kubelet[1917]: I0515 10:23:44.491426 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-net\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493953 kubelet[1917]: I0515 10:23:44.491440 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cni-path\") pod \"aacf5c52-8891-4638-b518-1068ca37a946\" (UID: \"aacf5c52-8891-4638-b518-1068ca37a946\") "
May 15 10:23:44.493953 kubelet[1917]: I0515 10:23:44.491470 1917 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-njtz2\" (UniqueName: \"kubernetes.io/projected/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-kube-api-access-njtz2\") on node \"localhost\" DevicePath \"\""
May 15 10:23:44.493953 kubelet[1917]: I0515 10:23:44.491488 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 10:23:44.493953 kubelet[1917]: I0515 10:23:44.491538 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cni-path" (OuterVolumeSpecName: "cni-path") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.493953 kubelet[1917]: I0515 10:23:44.491591 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-hostproc" (OuterVolumeSpecName: "hostproc") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494086 kubelet[1917]: I0515 10:23:44.491606 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494086 kubelet[1917]: I0515 10:23:44.491621 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494086 kubelet[1917]: I0515 10:23:44.491646 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494086 kubelet[1917]: I0515 10:23:44.491660 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494086 kubelet[1917]: I0515 10:23:44.491701 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494197 kubelet[1917]: I0515 10:23:44.492051 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494197 kubelet[1917]: I0515 10:23:44.492083 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:44.494197 kubelet[1917]: I0515 10:23:44.492403 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:23:44.494197 kubelet[1917]: I0515 10:23:44.493702 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aacf5c52-8891-4638-b518-1068ca37a946-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:23:44.495683 kubelet[1917]: I0515 10:23:44.495622 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-kube-api-access-7krjg" (OuterVolumeSpecName: "kube-api-access-7krjg") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "kube-api-access-7krjg". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:23:44.495764 kubelet[1917]: I0515 10:23:44.495741 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacf5c52-8891-4638-b518-1068ca37a946-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:23:44.495850 kubelet[1917]: I0515 10:23:44.495813 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aacf5c52-8891-4638-b518-1068ca37a946" (UID: "aacf5c52-8891-4638-b518-1068ca37a946"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:23:44.592371 kubelet[1917]: I0515 10:23:44.592328 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592371 kubelet[1917]: I0515 10:23:44.592364 1917 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592371 kubelet[1917]: I0515 10:23:44.592373 1917 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592371 kubelet[1917]: I0515 10:23:44.592381 1917 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592393 1917 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592401 1917 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592408 1917 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592416 1917 
reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aacf5c52-8891-4638-b518-1068ca37a946-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592424 1917 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7krjg\" (UniqueName: \"kubernetes.io/projected/aacf5c52-8891-4638-b518-1068ca37a946-kube-api-access-7krjg\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592431 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aacf5c52-8891-4638-b518-1068ca37a946-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592438 1917 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592619 kubelet[1917]: I0515 10:23:44.592446 1917 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592822 kubelet[1917]: I0515 10:23:44.592453 1917 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:23:44.592822 kubelet[1917]: I0515 10:23:44.592460 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aacf5c52-8891-4638-b518-1068ca37a946-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:23:45.238065 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656-rootfs.mount: Deactivated successfully. May 15 10:23:45.238150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36071abe0c792175b2400ee6e82c55b8bcbefbc41c4c7c2b5c06ff1ee1ed3656-shm.mount: Deactivated successfully. May 15 10:23:45.238206 systemd[1]: var-lib-kubelet-pods-aacf5c52\x2d8891\x2d4638\x2db518\x2d1068ca37a946-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 10:23:45.238262 systemd[1]: var-lib-kubelet-pods-aacf5c52\x2d8891\x2d4638\x2db518\x2d1068ca37a946-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:23:45.238311 systemd[1]: var-lib-kubelet-pods-e3d52a4b\x2de5b2\x2d4aec\x2d97fd\x2d0e98a9fa226e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnjtz2.mount: Deactivated successfully. May 15 10:23:45.238360 systemd[1]: var-lib-kubelet-pods-aacf5c52\x2d8891\x2d4638\x2db518\x2d1068ca37a946-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7krjg.mount: Deactivated successfully. May 15 10:23:45.266114 kubelet[1917]: I0515 10:23:45.266075 1917 scope.go:117] "RemoveContainer" containerID="f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1" May 15 10:23:45.267297 env[1221]: time="2025-05-15T10:23:45.267263438Z" level=info msg="RemoveContainer for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\"" May 15 10:23:45.269338 systemd[1]: Removed slice kubepods-besteffort-pode3d52a4b_e5b2_4aec_97fd_0e98a9fa226e.slice. May 15 10:23:45.272862 systemd[1]: Removed slice kubepods-burstable-podaacf5c52_8891_4638_b518_1068ca37a946.slice. May 15 10:23:45.272943 systemd[1]: kubepods-burstable-podaacf5c52_8891_4638_b518_1068ca37a946.slice: Consumed 6.627s CPU time. 
May 15 10:23:45.273524 env[1221]: time="2025-05-15T10:23:45.273485389Z" level=info msg="RemoveContainer for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" returns successfully" May 15 10:23:45.273784 kubelet[1917]: I0515 10:23:45.273757 1917 scope.go:117] "RemoveContainer" containerID="f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1" May 15 10:23:45.274005 env[1221]: time="2025-05-15T10:23:45.273945848Z" level=error msg="ContainerStatus for \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\": not found" May 15 10:23:45.274912 kubelet[1917]: E0515 10:23:45.274884 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\": not found" containerID="f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1" May 15 10:23:45.274998 kubelet[1917]: I0515 10:23:45.274921 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1"} err="failed to get container status \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0be04765ffca3a37f3756d552e26ce0aee42c4b97ce18261e2f93961b744ed1\": not found" May 15 10:23:45.275045 kubelet[1917]: I0515 10:23:45.275000 1917 scope.go:117] "RemoveContainer" containerID="82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a" May 15 10:23:45.275967 env[1221]: time="2025-05-15T10:23:45.275941436Z" level=info msg="RemoveContainer for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\"" May 15 10:23:45.278259 env[1221]: 
time="2025-05-15T10:23:45.278231369Z" level=info msg="RemoveContainer for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" returns successfully" May 15 10:23:45.278481 kubelet[1917]: I0515 10:23:45.278464 1917 scope.go:117] "RemoveContainer" containerID="aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8" May 15 10:23:45.279369 env[1221]: time="2025-05-15T10:23:45.279347958Z" level=info msg="RemoveContainer for \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\"" May 15 10:23:45.282497 env[1221]: time="2025-05-15T10:23:45.282469493Z" level=info msg="RemoveContainer for \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\" returns successfully" May 15 10:23:45.282756 kubelet[1917]: I0515 10:23:45.282740 1917 scope.go:117] "RemoveContainer" containerID="105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced" May 15 10:23:45.283902 env[1221]: time="2025-05-15T10:23:45.283873908Z" level=info msg="RemoveContainer for \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\"" May 15 10:23:45.287509 env[1221]: time="2025-05-15T10:23:45.287476861Z" level=info msg="RemoveContainer for \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\" returns successfully" May 15 10:23:45.287674 kubelet[1917]: I0515 10:23:45.287652 1917 scope.go:117] "RemoveContainer" containerID="f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d" May 15 10:23:45.289231 env[1221]: time="2025-05-15T10:23:45.289205381Z" level=info msg="RemoveContainer for \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\"" May 15 10:23:45.291368 env[1221]: time="2025-05-15T10:23:45.291340202Z" level=info msg="RemoveContainer for \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\" returns successfully" May 15 10:23:45.291688 kubelet[1917]: I0515 10:23:45.291654 1917 scope.go:117] "RemoveContainer" containerID="35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e" 
May 15 10:23:45.293321 env[1221]: time="2025-05-15T10:23:45.293298311Z" level=info msg="RemoveContainer for \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\"" May 15 10:23:45.296278 env[1221]: time="2025-05-15T10:23:45.296251214Z" level=info msg="RemoveContainer for \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\" returns successfully" May 15 10:23:45.296549 kubelet[1917]: I0515 10:23:45.296518 1917 scope.go:117] "RemoveContainer" containerID="82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a" May 15 10:23:45.296865 env[1221]: time="2025-05-15T10:23:45.296814548Z" level=error msg="ContainerStatus for \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\": not found" May 15 10:23:45.297069 kubelet[1917]: E0515 10:23:45.297047 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\": not found" containerID="82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a" May 15 10:23:45.297129 kubelet[1917]: I0515 10:23:45.297079 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a"} err="failed to get container status \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"82759e875190cf8f910ac7a8afcee2748e8295ee97fa1b5e8ec8f284d616bb0a\": not found" May 15 10:23:45.297129 kubelet[1917]: I0515 10:23:45.297100 1917 scope.go:117] "RemoveContainer" containerID="aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8" May 15 10:23:45.297310 env[1221]: 
time="2025-05-15T10:23:45.297267727Z" level=error msg="ContainerStatus for \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\": not found" May 15 10:23:45.297429 kubelet[1917]: E0515 10:23:45.297409 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\": not found" containerID="aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8" May 15 10:23:45.297502 kubelet[1917]: I0515 10:23:45.297484 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8"} err="failed to get container status \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"aacb0fbe79841692c0120ab0ea2590b87e744dc798fd5b3fa916e69acae7e4b8\": not found" May 15 10:23:45.297574 kubelet[1917]: I0515 10:23:45.297560 1917 scope.go:117] "RemoveContainer" containerID="105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced" May 15 10:23:45.297858 env[1221]: time="2025-05-15T10:23:45.297806502Z" level=error msg="ContainerStatus for \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\": not found" May 15 10:23:45.297995 kubelet[1917]: E0515 10:23:45.297977 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\": not found" 
containerID="105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced" May 15 10:23:45.298040 kubelet[1917]: I0515 10:23:45.298000 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced"} err="failed to get container status \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\": rpc error: code = NotFound desc = an error occurred when try to find container \"105def8ccaf2ccfa123147770f7aa69efaa7a13d12dcd130916b6b07fbddbced\": not found" May 15 10:23:45.298040 kubelet[1917]: I0515 10:23:45.298026 1917 scope.go:117] "RemoveContainer" containerID="f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d" May 15 10:23:45.298194 env[1221]: time="2025-05-15T10:23:45.298156366Z" level=error msg="ContainerStatus for \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\": not found" May 15 10:23:45.298300 kubelet[1917]: E0515 10:23:45.298285 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\": not found" containerID="f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d" May 15 10:23:45.298336 kubelet[1917]: I0515 10:23:45.298303 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d"} err="failed to get container status \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3da6b53898c08e9577ca9366682776cb64ac2c671cfd36c491bbb43a9ff120d\": not found" May 15 10:23:45.298336 
kubelet[1917]: I0515 10:23:45.298315 1917 scope.go:117] "RemoveContainer" containerID="35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e" May 15 10:23:45.298492 env[1221]: time="2025-05-15T10:23:45.298446792Z" level=error msg="ContainerStatus for \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\": not found" May 15 10:23:45.298621 kubelet[1917]: E0515 10:23:45.298605 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\": not found" containerID="35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e" May 15 10:23:45.298708 kubelet[1917]: I0515 10:23:45.298689 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e"} err="failed to get container status \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"35426b4e71f0118d0782c8224aec97e45fe5cfe644ed2808880c70874306fe5e\": not found" May 15 10:23:46.110860 kubelet[1917]: I0515 10:23:46.110811 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacf5c52-8891-4638-b518-1068ca37a946" path="/var/lib/kubelet/pods/aacf5c52-8891-4638-b518-1068ca37a946/volumes" May 15 10:23:46.111377 kubelet[1917]: I0515 10:23:46.111346 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e" path="/var/lib/kubelet/pods/e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e/volumes" May 15 10:23:46.165818 kubelet[1917]: E0515 10:23:46.165776 1917 kubelet.go:2901] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:23:46.178250 sshd[3538]: pam_unix(sshd:session): session closed for user core May 15 10:23:46.180921 systemd[1]: sshd@21-10.0.0.110:22-10.0.0.1:56786.service: Deactivated successfully. May 15 10:23:46.181489 systemd[1]: session-22.scope: Deactivated successfully. May 15 10:23:46.181632 systemd[1]: session-22.scope: Consumed 1.778s CPU time. May 15 10:23:46.182053 systemd-logind[1207]: Session 22 logged out. Waiting for processes to exit. May 15 10:23:46.183082 systemd[1]: Started sshd@22-10.0.0.110:22-10.0.0.1:50046.service. May 15 10:23:46.183996 systemd-logind[1207]: Removed session 22. May 15 10:23:46.222646 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 50046 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:23:46.223800 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:23:46.227479 systemd-logind[1207]: New session 23 of user core. May 15 10:23:46.227947 systemd[1]: Started session-23.scope. May 15 10:23:47.483294 sshd[3700]: pam_unix(sshd:session): session closed for user core May 15 10:23:47.487099 systemd[1]: Started sshd@23-10.0.0.110:22-10.0.0.1:50052.service. May 15 10:23:47.488316 systemd[1]: sshd@22-10.0.0.110:22-10.0.0.1:50046.service: Deactivated successfully. May 15 10:23:47.488991 systemd[1]: session-23.scope: Deactivated successfully. May 15 10:23:47.489134 systemd[1]: session-23.scope: Consumed 1.167s CPU time. May 15 10:23:47.490487 systemd-logind[1207]: Session 23 logged out. Waiting for processes to exit. 
May 15 10:23:47.493547 kubelet[1917]: E0515 10:23:47.493514 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e" containerName="cilium-operator" May 15 10:23:47.493547 kubelet[1917]: E0515 10:23:47.493542 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aacf5c52-8891-4638-b518-1068ca37a946" containerName="mount-bpf-fs" May 15 10:23:47.493547 kubelet[1917]: E0515 10:23:47.493549 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aacf5c52-8891-4638-b518-1068ca37a946" containerName="clean-cilium-state" May 15 10:23:47.493547 kubelet[1917]: E0515 10:23:47.493557 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aacf5c52-8891-4638-b518-1068ca37a946" containerName="cilium-agent" May 15 10:23:47.493547 kubelet[1917]: E0515 10:23:47.493564 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aacf5c52-8891-4638-b518-1068ca37a946" containerName="mount-cgroup" May 15 10:23:47.493547 kubelet[1917]: E0515 10:23:47.493577 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aacf5c52-8891-4638-b518-1068ca37a946" containerName="apply-sysctl-overwrites" May 15 10:23:47.495998 kubelet[1917]: I0515 10:23:47.495958 1917 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d52a4b-e5b2-4aec-97fd-0e98a9fa226e" containerName="cilium-operator" May 15 10:23:47.495998 kubelet[1917]: I0515 10:23:47.495985 1917 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacf5c52-8891-4638-b518-1068ca37a946" containerName="cilium-agent" May 15 10:23:47.497400 systemd-logind[1207]: Removed session 23. 
May 15 10:23:47.508309 kubelet[1917]: I0515 10:23:47.507867 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-lib-modules\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508309 kubelet[1917]: I0515 10:23:47.507902 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-clustermesh-secrets\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508309 kubelet[1917]: I0515 10:23:47.507969 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-net\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508309 kubelet[1917]: I0515 10:23:47.508021 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-bpf-maps\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508309 kubelet[1917]: I0515 10:23:47.508041 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-xtables-lock\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508309 kubelet[1917]: I0515 10:23:47.508059 1917 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-run\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508548 kubelet[1917]: I0515 10:23:47.508075 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-kernel\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508548 kubelet[1917]: I0515 10:23:47.508091 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-ipsec-secrets\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508548 kubelet[1917]: I0515 10:23:47.508105 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-etc-cni-netd\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508548 kubelet[1917]: I0515 10:23:47.508120 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hostproc\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508548 kubelet[1917]: I0515 10:23:47.508137 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-config-path\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508548 kubelet[1917]: I0515 10:23:47.508158 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-cgroup\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508692 kubelet[1917]: I0515 10:23:47.508173 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cj8\" (UniqueName: \"kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-kube-api-access-z8cj8\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508692 kubelet[1917]: I0515 10:23:47.508190 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cni-path\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.508692 kubelet[1917]: I0515 10:23:47.508205 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hubble-tls\") pod \"cilium-dkv8g\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") " pod="kube-system/cilium-dkv8g" May 15 10:23:47.514479 systemd[1]: Created slice kubepods-burstable-podf40d47a4_4d8a_4f31_b4ac_e4c744799503.slice. 
May 15 10:23:47.545905 sshd[3711]: Accepted publickey for core from 10.0.0.1 port 50052 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:47.547511 sshd[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:47.550596 systemd-logind[1207]: New session 24 of user core.
May 15 10:23:47.551429 systemd[1]: Started session-24.scope.
May 15 10:23:47.679789 sshd[3711]: pam_unix(sshd:session): session closed for user core
May 15 10:23:47.683303 systemd[1]: Started sshd@24-10.0.0.110:22-10.0.0.1:50066.service.
May 15 10:23:47.683836 systemd[1]: sshd@23-10.0.0.110:22-10.0.0.1:50052.service: Deactivated successfully.
May 15 10:23:47.684514 systemd[1]: session-24.scope: Deactivated successfully.
May 15 10:23:47.685971 systemd-logind[1207]: Session 24 logged out. Waiting for processes to exit.
May 15 10:23:47.688113 systemd-logind[1207]: Removed session 24.
May 15 10:23:47.692660 kubelet[1917]: E0515 10:23:47.692629 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:23:47.693950 env[1221]: time="2025-05-15T10:23:47.693910107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dkv8g,Uid:f40d47a4-4d8a-4f31-b4ac-e4c744799503,Namespace:kube-system,Attempt:0,}"
May 15 10:23:47.705354 env[1221]: time="2025-05-15T10:23:47.705292989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:23:47.705493 env[1221]: time="2025-05-15T10:23:47.705465223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:23:47.705576 env[1221]: time="2025-05-15T10:23:47.705549579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:23:47.706491 env[1221]: time="2025-05-15T10:23:47.705887486Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6 pid=3737 runtime=io.containerd.runc.v2
May 15 10:23:47.715438 systemd[1]: Started cri-containerd-c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6.scope.
May 15 10:23:47.720561 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 50066 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE
May 15 10:23:47.721841 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:23:47.726521 systemd-logind[1207]: New session 25 of user core.
May 15 10:23:47.726914 systemd[1]: Started session-25.scope.
May 15 10:23:47.744270 env[1221]: time="2025-05-15T10:23:47.742556235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dkv8g,Uid:f40d47a4-4d8a-4f31-b4ac-e4c744799503,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6\""
May 15 10:23:47.745005 kubelet[1917]: E0515 10:23:47.744984 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:23:47.748601 env[1221]: time="2025-05-15T10:23:47.748527605Z" level=info msg="CreateContainer within sandbox \"c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 10:23:47.761226 env[1221]: time="2025-05-15T10:23:47.761170399Z" level=info msg="CreateContainer within sandbox \"c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\""
May 15 10:23:47.761677 env[1221]: time="2025-05-15T10:23:47.761642541Z" level=info msg="StartContainer for \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\""
May 15 10:23:47.775828 systemd[1]: Started cri-containerd-eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e.scope.
May 15 10:23:47.794630 systemd[1]: cri-containerd-eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e.scope: Deactivated successfully.
May 15 10:23:47.813831 env[1221]: time="2025-05-15T10:23:47.813769615Z" level=info msg="shim disconnected" id=eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e
May 15 10:23:47.813831 env[1221]: time="2025-05-15T10:23:47.813830452Z" level=warning msg="cleaning up after shim disconnected" id=eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e namespace=k8s.io
May 15 10:23:47.813831 env[1221]: time="2025-05-15T10:23:47.813840012Z" level=info msg="cleaning up dead shim"
May 15 10:23:47.823744 env[1221]: time="2025-05-15T10:23:47.822010738Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3804 runtime=io.containerd.runc.v2\ntime=\"2025-05-15T10:23:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 15 10:23:47.823744 env[1221]: time="2025-05-15T10:23:47.822266688Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed"
May 15 10:23:47.823988 env[1221]: time="2025-05-15T10:23:47.823935504Z" level=error msg="Failed to pipe stdout of container \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\"" error="reading from a closed fifo"
May 15 10:23:47.824038 env[1221]: time="2025-05-15T10:23:47.824016100Z" level=error msg="Failed to pipe stderr of container \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\"" error="reading from a closed fifo"
May 15 10:23:47.825891 env[1221]: time="2025-05-15T10:23:47.825813711Z" level=error msg="StartContainer for \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 15 10:23:47.827185 kubelet[1917]: E0515 10:23:47.827149 1917 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e"
May 15 10:23:47.828639 kubelet[1917]: E0515 10:23:47.828604 1917 kuberuntime_manager.go:1272] "Unhandled Error" err=<
May 15 10:23:47.828639 kubelet[1917]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 15 10:23:47.828639 kubelet[1917]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 15 10:23:47.828639 kubelet[1917]: rm /hostbin/cilium-mount
May 15 10:23:47.828851 kubelet[1917]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8cj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dkv8g_kube-system(f40d47a4-4d8a-4f31-b4ac-e4c744799503): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 15 10:23:47.828851 kubelet[1917]: > logger="UnhandledError"
May 15 10:23:47.833252 kubelet[1917]: E0515 10:23:47.833185 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dkv8g" podUID="f40d47a4-4d8a-4f31-b4ac-e4c744799503"
May 15 10:23:48.138938 kubelet[1917]: I0515 10:23:48.138826 1917 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T10:23:48Z","lastTransitionTime":"2025-05-15T10:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 10:23:48.281751 env[1221]: time="2025-05-15T10:23:48.281703618Z" level=info msg="StopPodSandbox for \"c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6\""
May 15 10:23:48.281894 env[1221]: time="2025-05-15T10:23:48.281764696Z" level=info msg="Container to stop \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:23:48.294135 systemd[1]: cri-containerd-c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6.scope: Deactivated successfully.
May 15 10:23:48.322978 env[1221]: time="2025-05-15T10:23:48.322924626Z" level=info msg="shim disconnected" id=c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6
May 15 10:23:48.322978 env[1221]: time="2025-05-15T10:23:48.322974304Z" level=warning msg="cleaning up after shim disconnected" id=c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6 namespace=k8s.io
May 15 10:23:48.322978 env[1221]: time="2025-05-15T10:23:48.322983064Z" level=info msg="cleaning up dead shim"
May 15 10:23:48.330928 env[1221]: time="2025-05-15T10:23:48.330889110Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3834 runtime=io.containerd.runc.v2\n"
May 15 10:23:48.331202 env[1221]: time="2025-05-15T10:23:48.331163740Z" level=info msg="TearDown network for sandbox \"c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6\" successfully"
May 15 10:23:48.331202 env[1221]: time="2025-05-15T10:23:48.331194779Z" level=info msg="StopPodSandbox for \"c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6\" returns successfully"
May 15 10:23:48.415033 kubelet[1917]: I0515 10:23:48.414906 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-ipsec-secrets\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415033 kubelet[1917]: I0515 10:23:48.414953 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-clustermesh-secrets\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415033 kubelet[1917]: I0515 10:23:48.414974 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-config-path\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415033 kubelet[1917]: I0515 10:23:48.414991 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-xtables-lock\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415033 kubelet[1917]: I0515 10:23:48.415011 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hostproc\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415033 kubelet[1917]: I0515 10:23:48.415027 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hubble-tls\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415042 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-net\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415058 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-cgroup\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415076 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8cj8\" (UniqueName: \"kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-kube-api-access-z8cj8\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415089 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-kernel\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415125 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cni-path\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415140 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-bpf-maps\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415154 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-etc-cni-netd\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415169 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-run\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415187 1917 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-lib-modules\") pod \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\" (UID: \"f40d47a4-4d8a-4f31-b4ac-e4c744799503\") "
May 15 10:23:48.415281 kubelet[1917]: I0515 10:23:48.415251 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.415623 kubelet[1917]: I0515 10:23:48.415544 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.416707 kubelet[1917]: I0515 10:23:48.415738 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cni-path" (OuterVolumeSpecName: "cni-path") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418431 kubelet[1917]: I0515 10:23:48.417846 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418015 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418136 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-kube-api-access-z8cj8" (OuterVolumeSpecName: "kube-api-access-z8cj8") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "kube-api-access-z8cj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418159 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418173 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hostproc" (OuterVolumeSpecName: "hostproc") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418183 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418195 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.418700 kubelet[1917]: I0515 10:23:48.418531 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 10:23:48.420312 kubelet[1917]: I0515 10:23:48.420270 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 10:23:48.421121 kubelet[1917]: I0515 10:23:48.421090 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 10:23:48.421557 kubelet[1917]: I0515 10:23:48.421535 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 10:23:48.421632 kubelet[1917]: I0515 10:23:48.421615 1917 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f40d47a4-4d8a-4f31-b4ac-e4c744799503" (UID: "f40d47a4-4d8a-4f31-b4ac-e4c744799503"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:23:48.516140 kubelet[1917]: I0515 10:23:48.516094 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516140 kubelet[1917]: I0515 10:23:48.516128 1917 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516140 kubelet[1917]: I0515 10:23:48.516136 1917 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40d47a4-4d8a-4f31-b4ac-e4c744799503-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516140 kubelet[1917]: I0515 10:23:48.516147 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516155 1917 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516162 1917 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516171 1917 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516186 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516193 1917 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z8cj8\" (UniqueName: \"kubernetes.io/projected/f40d47a4-4d8a-4f31-b4ac-e4c744799503-kube-api-access-z8cj8\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516201 1917 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516210 1917 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516217 1917 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516224 1917 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516231 1917 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.516583 kubelet[1917]: I0515 10:23:48.516238 1917 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40d47a4-4d8a-4f31-b4ac-e4c744799503-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 10:23:48.614250 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9e774201524adc18159d897ae9f182208ef218f2ded6110b49692dcab26a9c6-shm.mount: Deactivated successfully.
May 15 10:23:48.614356 systemd[1]: var-lib-kubelet-pods-f40d47a4\x2d4d8a\x2d4f31\x2db4ac\x2de4c744799503-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz8cj8.mount: Deactivated successfully.
May 15 10:23:48.614432 systemd[1]: var-lib-kubelet-pods-f40d47a4\x2d4d8a\x2d4f31\x2db4ac\x2de4c744799503-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 10:23:48.614485 systemd[1]: var-lib-kubelet-pods-f40d47a4\x2d4d8a\x2d4f31\x2db4ac\x2de4c744799503-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 10:23:48.614536 systemd[1]: var-lib-kubelet-pods-f40d47a4\x2d4d8a\x2d4f31\x2db4ac\x2de4c744799503-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 15 10:23:49.109472 kubelet[1917]: E0515 10:23:49.109433 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:23:49.283909 kubelet[1917]: I0515 10:23:49.283868 1917 scope.go:117] "RemoveContainer" containerID="eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e"
May 15 10:23:49.289586 systemd[1]: Removed slice kubepods-burstable-podf40d47a4_4d8a_4f31_b4ac_e4c744799503.slice.
May 15 10:23:49.290157 env[1221]: time="2025-05-15T10:23:49.290118383Z" level=info msg="RemoveContainer for \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\""
May 15 10:23:49.297548 env[1221]: time="2025-05-15T10:23:49.297506594Z" level=info msg="RemoveContainer for \"eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e\" returns successfully"
May 15 10:23:50.113270 kubelet[1917]: I0515 10:23:50.113218 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40d47a4-4d8a-4f31-b4ac-e4c744799503" path="/var/lib/kubelet/pods/f40d47a4-4d8a-4f31-b4ac-e4c744799503/volumes"
May 15 10:23:50.831527 kubelet[1917]: E0515 10:23:50.831478 1917 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40d47a4-4d8a-4f31-b4ac-e4c744799503" containerName="mount-cgroup"
May 15 10:23:50.831527 kubelet[1917]: I0515 10:23:50.831529 1917 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40d47a4-4d8a-4f31-b4ac-e4c744799503" containerName="mount-cgroup"
May 15 10:23:50.836523 systemd[1]: Created slice kubepods-burstable-pod36f52b65_3cf2_43fe_9a65_fa462ed53e45.slice.
May 15 10:23:50.920588 kubelet[1917]: W0515 10:23:50.920535 1917 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf40d47a4_4d8a_4f31_b4ac_e4c744799503.slice/cri-containerd-eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e.scope WatchSource:0}: container "eb3b26c13f30a2355d0fdc4e0b11a8eb9c93a241594eea3cc303840b3c46501e" in namespace "k8s.io": not found
May 15 10:23:50.929610 kubelet[1917]: I0515 10:23:50.929584 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-etc-cni-netd\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.929772 kubelet[1917]: I0515 10:23:50.929758 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/36f52b65-3cf2-43fe-9a65-fa462ed53e45-cilium-ipsec-secrets\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.929873 kubelet[1917]: I0515 10:23:50.929857 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn92j\" (UniqueName: \"kubernetes.io/projected/36f52b65-3cf2-43fe-9a65-fa462ed53e45-kube-api-access-sn92j\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.929958 kubelet[1917]: I0515 10:23:50.929946 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-cni-path\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.930050 kubelet[1917]: I0515 10:23:50.930037 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-xtables-lock\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.930130 kubelet[1917]: I0515 10:23:50.930118 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36f52b65-3cf2-43fe-9a65-fa462ed53e45-clustermesh-secrets\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.930222 kubelet[1917]: I0515 10:23:50.930210 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-lib-modules\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.930318 kubelet[1917]: I0515 10:23:50.930305 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-bpf-maps\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.930414 kubelet[1917]: I0515 10:23:50.930403 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-hostproc\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd"
May 15 10:23:50.930522 kubelet[1917]: I0515 10:23:50.930510 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/36f52b65-3cf2-43fe-9a65-fa462ed53e45-cilium-config-path\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd" May 15 10:23:50.930625 kubelet[1917]: I0515 10:23:50.930613 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-host-proc-sys-net\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd" May 15 10:23:50.930749 kubelet[1917]: I0515 10:23:50.930734 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36f52b65-3cf2-43fe-9a65-fa462ed53e45-hubble-tls\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd" May 15 10:23:50.930866 kubelet[1917]: I0515 10:23:50.930854 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-host-proc-sys-kernel\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd" May 15 10:23:50.930978 kubelet[1917]: I0515 10:23:50.930965 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-cilium-cgroup\") pod \"cilium-wc7bd\" (UID: \"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd" May 15 10:23:50.931075 kubelet[1917]: I0515 10:23:50.931063 1917 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36f52b65-3cf2-43fe-9a65-fa462ed53e45-cilium-run\") pod \"cilium-wc7bd\" (UID: 
\"36f52b65-3cf2-43fe-9a65-fa462ed53e45\") " pod="kube-system/cilium-wc7bd" May 15 10:23:51.138651 kubelet[1917]: E0515 10:23:51.138544 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:51.139073 env[1221]: time="2025-05-15T10:23:51.139022103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wc7bd,Uid:36f52b65-3cf2-43fe-9a65-fa462ed53e45,Namespace:kube-system,Attempt:0,}" May 15 10:23:51.151413 env[1221]: time="2025-05-15T10:23:51.151359085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:23:51.151577 env[1221]: time="2025-05-15T10:23:51.151395044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:23:51.151577 env[1221]: time="2025-05-15T10:23:51.151405484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:23:51.151697 env[1221]: time="2025-05-15T10:23:51.151617679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9 pid=3862 runtime=io.containerd.runc.v2 May 15 10:23:51.162149 systemd[1]: Started cri-containerd-91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9.scope. 
May 15 10:23:51.167046 kubelet[1917]: E0515 10:23:51.167005 1917 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:23:51.188798 env[1221]: time="2025-05-15T10:23:51.188749102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wc7bd,Uid:36f52b65-3cf2-43fe-9a65-fa462ed53e45,Namespace:kube-system,Attempt:0,} returns sandbox id \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\"" May 15 10:23:51.189385 kubelet[1917]: E0515 10:23:51.189364 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:51.191630 env[1221]: time="2025-05-15T10:23:51.191588834Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:23:51.200306 env[1221]: time="2025-05-15T10:23:51.200252304Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95\"" May 15 10:23:51.200850 env[1221]: time="2025-05-15T10:23:51.200826570Z" level=info msg="StartContainer for \"94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95\"" May 15 10:23:51.213804 systemd[1]: Started cri-containerd-94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95.scope. 
May 15 10:23:51.245725 env[1221]: time="2025-05-15T10:23:51.244753789Z" level=info msg="StartContainer for \"94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95\" returns successfully" May 15 10:23:51.254034 systemd[1]: cri-containerd-94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95.scope: Deactivated successfully. May 15 10:23:51.276652 env[1221]: time="2025-05-15T10:23:51.276602860Z" level=info msg="shim disconnected" id=94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95 May 15 10:23:51.276871 env[1221]: time="2025-05-15T10:23:51.276850974Z" level=warning msg="cleaning up after shim disconnected" id=94c70737769768ce1f3d7977ba3539658827b307b2b5347aa7828c57bd058d95 namespace=k8s.io May 15 10:23:51.276950 env[1221]: time="2025-05-15T10:23:51.276936972Z" level=info msg="cleaning up dead shim" May 15 10:23:51.283728 env[1221]: time="2025-05-15T10:23:51.283697649Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3947 runtime=io.containerd.runc.v2\n" May 15 10:23:51.288901 kubelet[1917]: E0515 10:23:51.288870 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:51.292832 env[1221]: time="2025-05-15T10:23:51.292800109Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:23:51.309727 env[1221]: time="2025-05-15T10:23:51.309688901Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01\"" May 15 10:23:51.311832 env[1221]: time="2025-05-15T10:23:51.311805890Z" 
level=info msg="StartContainer for \"9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01\"" May 15 10:23:51.329749 systemd[1]: Started cri-containerd-9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01.scope. May 15 10:23:51.364017 env[1221]: time="2025-05-15T10:23:51.363972230Z" level=info msg="StartContainer for \"9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01\" returns successfully" May 15 10:23:51.371335 systemd[1]: cri-containerd-9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01.scope: Deactivated successfully. May 15 10:23:51.391067 env[1221]: time="2025-05-15T10:23:51.390104599Z" level=info msg="shim disconnected" id=9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01 May 15 10:23:51.391067 env[1221]: time="2025-05-15T10:23:51.390152878Z" level=warning msg="cleaning up after shim disconnected" id=9545d86da391f663a79fb49da922b3b810826a7e93f13f89fa7dc37180312d01 namespace=k8s.io May 15 10:23:51.391067 env[1221]: time="2025-05-15T10:23:51.390163277Z" level=info msg="cleaning up dead shim" May 15 10:23:51.397245 env[1221]: time="2025-05-15T10:23:51.397191268Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" May 15 10:23:52.292243 kubelet[1917]: E0515 10:23:52.292190 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:52.294694 env[1221]: time="2025-05-15T10:23:52.294640873Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:23:52.324144 env[1221]: time="2025-05-15T10:23:52.324087619Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d\"" May 15 10:23:52.324812 env[1221]: time="2025-05-15T10:23:52.324773085Z" level=info msg="StartContainer for \"3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d\"" May 15 10:23:52.341358 systemd[1]: Started cri-containerd-3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d.scope. May 15 10:23:52.382714 env[1221]: time="2025-05-15T10:23:52.382658918Z" level=info msg="StartContainer for \"3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d\" returns successfully" May 15 10:23:52.387371 systemd[1]: cri-containerd-3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d.scope: Deactivated successfully. May 15 10:23:52.408811 env[1221]: time="2025-05-15T10:23:52.408750774Z" level=info msg="shim disconnected" id=3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d May 15 10:23:52.408811 env[1221]: time="2025-05-15T10:23:52.408800613Z" level=warning msg="cleaning up after shim disconnected" id=3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d namespace=k8s.io May 15 10:23:52.408811 env[1221]: time="2025-05-15T10:23:52.408810893Z" level=info msg="cleaning up dead shim" May 15 10:23:52.415442 env[1221]: time="2025-05-15T10:23:52.415395956Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n" May 15 10:23:53.036657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a1aa041d3860cf18252ec9b876ac3bf6e3b6f48de0113d46bf222118e80327d-rootfs.mount: Deactivated successfully. 
May 15 10:23:53.295679 kubelet[1917]: E0515 10:23:53.295560 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:53.297819 env[1221]: time="2025-05-15T10:23:53.297778751Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:23:53.311792 env[1221]: time="2025-05-15T10:23:53.311750465Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737\"" May 15 10:23:53.314210 env[1221]: time="2025-05-15T10:23:53.312319855Z" level=info msg="StartContainer for \"33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737\"" May 15 10:23:53.326993 systemd[1]: Started cri-containerd-33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737.scope. May 15 10:23:53.360647 env[1221]: time="2025-05-15T10:23:53.360607163Z" level=info msg="StartContainer for \"33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737\" returns successfully" May 15 10:23:53.363129 systemd[1]: cri-containerd-33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737.scope: Deactivated successfully. 
May 15 10:23:53.381315 env[1221]: time="2025-05-15T10:23:53.381275078Z" level=info msg="shim disconnected" id=33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737 May 15 10:23:53.381512 env[1221]: time="2025-05-15T10:23:53.381491154Z" level=warning msg="cleaning up after shim disconnected" id=33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737 namespace=k8s.io May 15 10:23:53.381574 env[1221]: time="2025-05-15T10:23:53.381561033Z" level=info msg="cleaning up dead shim" May 15 10:23:53.388298 env[1221]: time="2025-05-15T10:23:53.388269794Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:23:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n" May 15 10:23:54.036646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33a3f4e26b94db02a467b2f73c8d258269f118fb7ccc6e622bda6330eda6b737-rootfs.mount: Deactivated successfully. May 15 10:23:54.300307 kubelet[1917]: E0515 10:23:54.300201 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:54.302699 env[1221]: time="2025-05-15T10:23:54.302608359Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:23:54.316251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188776974.mount: Deactivated successfully. 
May 15 10:23:54.323410 env[1221]: time="2025-05-15T10:23:54.323364897Z" level=info msg="CreateContainer within sandbox \"91b5a5e1d810b31087e7310928e1f90963064b9a82e1b79c597c957cccc3dee9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1522f02e955b48c686bfd847cb5095d040ebbe13e940aee753f2aad2783e38a6\"" May 15 10:23:54.324053 env[1221]: time="2025-05-15T10:23:54.324026647Z" level=info msg="StartContainer for \"1522f02e955b48c686bfd847cb5095d040ebbe13e940aee753f2aad2783e38a6\"" May 15 10:23:54.337754 systemd[1]: Started cri-containerd-1522f02e955b48c686bfd847cb5095d040ebbe13e940aee753f2aad2783e38a6.scope. May 15 10:23:54.368796 env[1221]: time="2025-05-15T10:23:54.368751997Z" level=info msg="StartContainer for \"1522f02e955b48c686bfd847cb5095d040ebbe13e940aee753f2aad2783e38a6\" returns successfully" May 15 10:23:54.609694 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 15 10:23:55.304965 kubelet[1917]: E0515 10:23:55.304919 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:55.321048 kubelet[1917]: I0515 10:23:55.320972 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wc7bd" podStartSLOduration=6.320947194 podStartE2EDuration="6.320947194s" podCreationTimestamp="2025-05-15 10:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:23:55.320610878 +0000 UTC m=+89.304376441" watchObservedRunningTime="2025-05-15 10:23:55.320947194 +0000 UTC m=+89.304712757" May 15 10:23:56.069070 systemd[1]: run-containerd-runc-k8s.io-1522f02e955b48c686bfd847cb5095d040ebbe13e940aee753f2aad2783e38a6-runc.gIBSEO.mount: Deactivated successfully. 
May 15 10:23:57.140027 kubelet[1917]: E0515 10:23:57.139990 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:57.418494 systemd-networkd[1044]: lxc_health: Link UP May 15 10:23:57.427173 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:23:57.426741 systemd-networkd[1044]: lxc_health: Gained carrier May 15 10:23:58.182528 systemd[1]: run-containerd-runc-k8s.io-1522f02e955b48c686bfd847cb5095d040ebbe13e940aee753f2aad2783e38a6-runc.tuzzNP.mount: Deactivated successfully. May 15 10:23:59.048168 systemd-networkd[1044]: lxc_health: Gained IPv6LL May 15 10:23:59.140253 kubelet[1917]: E0515 10:23:59.140215 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:23:59.311246 kubelet[1917]: E0515 10:23:59.311135 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:24:00.312511 kubelet[1917]: E0515 10:24:00.312488 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:24:02.489169 sshd[3728]: pam_unix(sshd:session): session closed for user core May 15 10:24:02.491799 systemd[1]: sshd@24-10.0.0.110:22-10.0.0.1:50066.service: Deactivated successfully. May 15 10:24:02.492520 systemd[1]: session-25.scope: Deactivated successfully. May 15 10:24:02.493335 systemd-logind[1207]: Session 25 logged out. Waiting for processes to exit. May 15 10:24:02.494008 systemd-logind[1207]: Removed session 25. 
May 15 10:24:03.108989 kubelet[1917]: E0515 10:24:03.108954 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"