May 13 00:35:29.748975 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:35:29.748995 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon May 12 23:22:00 -00 2025
May 13 00:35:29.749003 kernel: efi: EFI v2.70 by EDK II
May 13 00:35:29.749009 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 13 00:35:29.749014 kernel: random: crng init done
May 13 00:35:29.749020 kernel: ACPI: Early table checksum verification disabled
May 13 00:35:29.749026 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 13 00:35:29.749033 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:35:29.749039 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749044 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749050 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749055 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749061 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749066 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749074 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749081 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749087 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:35:29.749093 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:35:29.749099 kernel: NUMA: Failed to initialise from firmware
May 13 00:35:29.749105 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:35:29.749111 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
May 13 00:35:29.749116 kernel: Zone ranges:
May 13 00:35:29.749122 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:35:29.749129 kernel: DMA32 empty
May 13 00:35:29.749135 kernel: Normal empty
May 13 00:35:29.749140 kernel: Movable zone start for each node
May 13 00:35:29.749146 kernel: Early memory node ranges
May 13 00:35:29.749152 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 13 00:35:29.749158 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 13 00:35:29.749164 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 13 00:35:29.749170 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 13 00:35:29.749176 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 13 00:35:29.749182 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 13 00:35:29.749188 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 13 00:35:29.749194 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:35:29.749201 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:35:29.749207 kernel: psci: probing for conduit method from ACPI.
May 13 00:35:29.749212 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:35:29.749218 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:35:29.749224 kernel: psci: Trusted OS migration not required
May 13 00:35:29.749232 kernel: psci: SMC Calling Convention v1.1
May 13 00:35:29.749239 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:35:29.749246 kernel: ACPI: SRAT not present
May 13 00:35:29.749253 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 13 00:35:29.749259 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 13 00:35:29.749265 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:35:29.749272 kernel: Detected PIPT I-cache on CPU0
May 13 00:35:29.749278 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:35:29.749284 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:35:29.749296 kernel: CPU features: detected: Spectre-v4
May 13 00:35:29.749304 kernel: CPU features: detected: Spectre-BHB
May 13 00:35:29.749312 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:35:29.749318 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:35:29.749325 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:35:29.749331 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:35:29.749337 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:35:29.749343 kernel: Policy zone: DMA
May 13 00:35:29.749350 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:35:29.749357 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:35:29.749363 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:35:29.749369 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:35:29.749375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:35:29.749383 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114952K reserved, 0K cma-reserved)
May 13 00:35:29.749389 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:35:29.749395 kernel: trace event string verifier disabled
May 13 00:35:29.749402 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:35:29.749408 kernel: rcu: RCU event tracing is enabled.
May 13 00:35:29.749414 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:35:29.749421 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:35:29.749427 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:35:29.749433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:35:29.749440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:35:29.749446 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:35:29.749453 kernel: GICv3: 256 SPIs implemented
May 13 00:35:29.749459 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:35:29.749465 kernel: GICv3: Distributor has no Range Selector support
May 13 00:35:29.749471 kernel: Root IRQ handler: gic_handle_irq
May 13 00:35:29.749477 kernel: GICv3: 16 PPIs implemented
May 13 00:35:29.749483 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:35:29.749490 kernel: ACPI: SRAT not present
May 13 00:35:29.749495 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:35:29.749502 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:35:29.749508 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:35:29.749514 kernel: GICv3: using LPI property table @0x00000000400d0000
May 13 00:35:29.749521 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 13 00:35:29.749528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:29.749534 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:35:29.749541 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:35:29.749547 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:35:29.749553 kernel: arm-pv: using stolen time PV
May 13 00:35:29.749560 kernel: Console: colour dummy device 80x25
May 13 00:35:29.749566 kernel: ACPI: Core revision 20210730
May 13 00:35:29.749573 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:35:29.749579 kernel: pid_max: default: 32768 minimum: 301
May 13 00:35:29.749585 kernel: LSM: Security Framework initializing
May 13 00:35:29.749593 kernel: SELinux: Initializing.
May 13 00:35:29.749599 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:35:29.749605 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:35:29.749612 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:35:29.749618 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:35:29.749624 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:35:29.749631 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:35:29.749637 kernel: Remapping and enabling EFI services.
May 13 00:35:29.749643 kernel: smp: Bringing up secondary CPUs ...
May 13 00:35:29.749651 kernel: Detected PIPT I-cache on CPU1
May 13 00:35:29.749657 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:35:29.749664 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 13 00:35:29.749670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:29.749677 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:35:29.749683 kernel: Detected PIPT I-cache on CPU2
May 13 00:35:29.749690 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:35:29.749733 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 13 00:35:29.749740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:29.749746 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:35:29.749755 kernel: Detected PIPT I-cache on CPU3
May 13 00:35:29.749761 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:35:29.749767 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 13 00:35:29.749774 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:35:29.749785 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:35:29.749792 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:35:29.749799 kernel: SMP: Total of 4 processors activated.
May 13 00:35:29.749806 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:35:29.749813 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:35:29.749819 kernel: CPU features: detected: Common not Private translations
May 13 00:35:29.749826 kernel: CPU features: detected: CRC32 instructions
May 13 00:35:29.749832 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:35:29.749840 kernel: CPU features: detected: LSE atomic instructions
May 13 00:35:29.749847 kernel: CPU features: detected: Privileged Access Never
May 13 00:35:29.749854 kernel: CPU features: detected: RAS Extension Support
May 13 00:35:29.749861 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:35:29.749867 kernel: CPU: All CPU(s) started at EL1
May 13 00:35:29.749875 kernel: alternatives: patching kernel code
May 13 00:35:29.749882 kernel: devtmpfs: initialized
May 13 00:35:29.749889 kernel: KASLR enabled
May 13 00:35:29.749895 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:35:29.749902 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:35:29.749909 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:35:29.749915 kernel: SMBIOS 3.0.0 present.
May 13 00:35:29.749922 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 13 00:35:29.749929 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:35:29.749936 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:35:29.749943 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:35:29.749950 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:35:29.749957 kernel: audit: initializing netlink subsys (disabled)
May 13 00:35:29.749964 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
May 13 00:35:29.749970 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:35:29.749980 kernel: cpuidle: using governor menu
May 13 00:35:29.749987 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:35:29.749993 kernel: ASID allocator initialised with 32768 entries
May 13 00:35:29.750001 kernel: ACPI: bus type PCI registered
May 13 00:35:29.750008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:35:29.750014 kernel: Serial: AMBA PL011 UART driver
May 13 00:35:29.750021 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:35:29.750028 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:35:29.750034 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:35:29.750041 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:35:29.750048 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:35:29.750054 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:35:29.750062 kernel: ACPI: Added _OSI(Module Device)
May 13 00:35:29.750069 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:35:29.750075 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:35:29.750082 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:35:29.750099 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:35:29.750106 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:35:29.750113 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:35:29.750120 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:35:29.750127 kernel: ACPI: Interpreter enabled
May 13 00:35:29.750137 kernel: ACPI: Using GIC for interrupt routing
May 13 00:35:29.750144 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:35:29.750150 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:35:29.750157 kernel: printk: console [ttyAMA0] enabled
May 13 00:35:29.750164 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:35:29.750305 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:35:29.750376 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:35:29.750441 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:35:29.750501 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:35:29.750561 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:35:29.750570 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:35:29.750577 kernel: PCI host bridge to bus 0000:00
May 13 00:35:29.750644 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:35:29.750721 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:35:29.750779 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:35:29.750835 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:35:29.750908 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:35:29.750977 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:35:29.751040 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:35:29.751101 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:35:29.751163 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:35:29.751226 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:35:29.751286 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:35:29.751359 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:35:29.751418 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:35:29.751472 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:35:29.751528 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:35:29.751537 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:35:29.751543 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:35:29.751552 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:35:29.751559 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:35:29.751566 kernel: iommu: Default domain type: Translated
May 13 00:35:29.751572 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:35:29.751579 kernel: vgaarb: loaded
May 13 00:35:29.751586 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:35:29.751593 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 00:35:29.751599 kernel: PTP clock support registered
May 13 00:35:29.751606 kernel: Registered efivars operations
May 13 00:35:29.751614 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:35:29.751621 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:35:29.751627 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:35:29.751634 kernel: pnp: PnP ACPI init
May 13 00:35:29.751711 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:35:29.751722 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:35:29.751728 kernel: NET: Registered PF_INET protocol family
May 13 00:35:29.751735 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:35:29.751744 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:35:29.751751 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:35:29.751758 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:35:29.751765 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:35:29.751771 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:35:29.751778 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:35:29.751785 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:35:29.751792 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:35:29.751798 kernel: PCI: CLS 0 bytes, default 64
May 13 00:35:29.751806 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:35:29.751813 kernel: kvm [1]: HYP mode not available
May 13 00:35:29.751820 kernel: Initialise system trusted keyrings
May 13 00:35:29.751826 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:35:29.751833 kernel: Key type asymmetric registered
May 13 00:35:29.751840 kernel: Asymmetric key parser 'x509' registered
May 13 00:35:29.751846 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:35:29.751853 kernel: io scheduler mq-deadline registered
May 13 00:35:29.751860 kernel: io scheduler kyber registered
May 13 00:35:29.751868 kernel: io scheduler bfq registered
May 13 00:35:29.751874 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:35:29.751881 kernel: ACPI: button: Power Button [PWRB]
May 13 00:35:29.751888 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:35:29.751953 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:35:29.751962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:35:29.751969 kernel: thunder_xcv, ver 1.0
May 13 00:35:29.751975 kernel: thunder_bgx, ver 1.0
May 13 00:35:29.751982 kernel: nicpf, ver 1.0
May 13 00:35:29.751990 kernel: nicvf, ver 1.0
May 13 00:35:29.752063 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:35:29.752122 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:35:29 UTC (1747096529)
May 13 00:35:29.752131 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:35:29.752138 kernel: NET: Registered PF_INET6 protocol family
May 13 00:35:29.752144 kernel: Segment Routing with IPv6
May 13 00:35:29.752151 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:35:29.752158 kernel: NET: Registered PF_PACKET protocol family
May 13 00:35:29.752166 kernel: Key type dns_resolver registered
May 13 00:35:29.752173 kernel: registered taskstats version 1
May 13 00:35:29.752179 kernel: Loading compiled-in X.509 certificates
May 13 00:35:29.752186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: d291b704d59536a3c0ba96fd6f5a99459de8de99'
May 13 00:35:29.752193 kernel: Key type .fscrypt registered
May 13 00:35:29.752199 kernel: Key type fscrypt-provisioning registered
May 13 00:35:29.752206 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:35:29.752212 kernel: ima: Allocated hash algorithm: sha1
May 13 00:35:29.752219 kernel: ima: No architecture policies found
May 13 00:35:29.752227 kernel: clk: Disabling unused clocks
May 13 00:35:29.752234 kernel: Freeing unused kernel memory: 36480K
May 13 00:35:29.752240 kernel: Run /init as init process
May 13 00:35:29.752247 kernel: with arguments:
May 13 00:35:29.752253 kernel: /init
May 13 00:35:29.752260 kernel: with environment:
May 13 00:35:29.752266 kernel: HOME=/
May 13 00:35:29.752273 kernel: TERM=linux
May 13 00:35:29.752279 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:35:29.752289 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:35:29.752304 systemd[1]: Detected virtualization kvm.
May 13 00:35:29.752311 systemd[1]: Detected architecture arm64.
May 13 00:35:29.752318 systemd[1]: Running in initrd.
May 13 00:35:29.752325 systemd[1]: No hostname configured, using default hostname.
May 13 00:35:29.752332 systemd[1]: Hostname set to .
May 13 00:35:29.752340 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:35:29.752348 systemd[1]: Queued start job for default target initrd.target.
May 13 00:35:29.752356 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:35:29.752363 systemd[1]: Reached target cryptsetup.target.
May 13 00:35:29.752370 systemd[1]: Reached target paths.target.
May 13 00:35:29.752377 systemd[1]: Reached target slices.target.
May 13 00:35:29.752383 systemd[1]: Reached target swap.target.
May 13 00:35:29.752391 systemd[1]: Reached target timers.target.
May 13 00:35:29.752398 systemd[1]: Listening on iscsid.socket.
May 13 00:35:29.752406 systemd[1]: Listening on iscsiuio.socket.
May 13 00:35:29.752413 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:35:29.752420 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:35:29.752428 systemd[1]: Listening on systemd-journald.socket.
May 13 00:35:29.752435 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:35:29.752442 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:35:29.752450 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:35:29.752457 systemd[1]: Reached target sockets.target.
May 13 00:35:29.752465 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:35:29.752472 systemd[1]: Finished network-cleanup.service.
May 13 00:35:29.752480 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:35:29.752487 systemd[1]: Starting systemd-journald.service...
May 13 00:35:29.752494 systemd[1]: Starting systemd-modules-load.service...
May 13 00:35:29.752501 systemd[1]: Starting systemd-resolved.service...
May 13 00:35:29.752508 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:35:29.752515 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:35:29.752522 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:35:29.752531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:35:29.752538 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:35:29.752545 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:35:29.752552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:35:29.752560 kernel: audit: type=1130 audit(1747096529.750:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.752570 systemd-journald[291]: Journal started
May 13 00:35:29.752612 systemd-journald[291]: Runtime Journal (/run/log/journal/0eeb2cbfcbf344a984b439dd86e53781) is 6.0M, max 48.7M, 42.6M free.
May 13 00:35:29.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.737641 systemd-modules-load[292]: Inserted module 'overlay'
May 13 00:35:29.754317 systemd[1]: Started systemd-journald.service.
May 13 00:35:29.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.758723 kernel: audit: type=1130 audit(1747096529.754:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.768931 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:35:29.776164 kernel: audit: type=1130 audit(1747096529.769:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.776189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:35:29.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.770197 systemd-resolved[293]: Positive Trust Anchors:
May 13 00:35:29.770204 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:35:29.770239 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:35:29.770618 systemd[1]: Starting dracut-cmdline.service...
May 13 00:35:29.789493 kernel: Bridge firewalling registered
May 13 00:35:29.789514 kernel: audit: type=1130 audit(1747096529.786:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.789557 dracut-cmdline[307]: dracut-dracut-053
May 13 00:35:29.777477 systemd-resolved[293]: Defaulting to hostname 'linux'.
May 13 00:35:29.791634 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ae60136413c5686d5b1e9c38408a367f831e354d706496e9f743f02289aad53d
May 13 00:35:29.785201 systemd-modules-load[292]: Inserted module 'br_netfilter'
May 13 00:35:29.785392 systemd[1]: Started systemd-resolved.service.
May 13 00:35:29.786480 systemd[1]: Reached target nss-lookup.target.
May 13 00:35:29.804712 kernel: SCSI subsystem initialized
May 13 00:35:29.814547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:35:29.814587 kernel: device-mapper: uevent: version 1.0.3
May 13 00:35:29.814597 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:35:29.817966 systemd-modules-load[292]: Inserted module 'dm_multipath'
May 13 00:35:29.818824 systemd[1]: Finished systemd-modules-load.service.
May 13 00:35:29.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.824120 systemd[1]: Starting systemd-sysctl.service...
May 13 00:35:29.825718 kernel: audit: type=1130 audit(1747096529.819:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.832647 systemd[1]: Finished systemd-sysctl.service.
May 13 00:35:29.837028 kernel: audit: type=1130 audit(1747096529.832:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:29.864717 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:35:29.878725 kernel: iscsi: registered transport (tcp)
May 13 00:35:29.896724 kernel: iscsi: registered transport (qla4xxx)
May 13 00:35:29.896765 kernel: QLogic iSCSI HBA Driver
May 13 00:35:29.935008 systemd[1]: Finished dracut-cmdline.service.
May 13 00:35:29.936707 systemd[1]: Starting dracut-pre-udev.service... May 13 00:35:29.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:29.940740 kernel: audit: type=1130 audit(1747096529.935:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:29.981731 kernel: raid6: neonx8 gen() 10551 MB/s May 13 00:35:29.998781 kernel: raid6: neonx8 xor() 10416 MB/s May 13 00:35:30.015719 kernel: raid6: neonx4 gen() 12944 MB/s May 13 00:35:30.032714 kernel: raid6: neonx4 xor() 11017 MB/s May 13 00:35:30.049716 kernel: raid6: neonx2 gen() 12962 MB/s May 13 00:35:30.066724 kernel: raid6: neonx2 xor() 10187 MB/s May 13 00:35:30.083714 kernel: raid6: neonx1 gen() 10476 MB/s May 13 00:35:30.100719 kernel: raid6: neonx1 xor() 8770 MB/s May 13 00:35:30.117714 kernel: raid6: int64x8 gen() 6241 MB/s May 13 00:35:30.134717 kernel: raid6: int64x8 xor() 3528 MB/s May 13 00:35:30.151722 kernel: raid6: int64x4 gen() 7183 MB/s May 13 00:35:30.168714 kernel: raid6: int64x4 xor() 3837 MB/s May 13 00:35:30.185715 kernel: raid6: int64x2 gen() 6142 MB/s May 13 00:35:30.202715 kernel: raid6: int64x2 xor() 3315 MB/s May 13 00:35:30.219715 kernel: raid6: int64x1 gen() 5036 MB/s May 13 00:35:30.236815 kernel: raid6: int64x1 xor() 2640 MB/s May 13 00:35:30.236826 kernel: raid6: using algorithm neonx2 gen() 12962 MB/s May 13 00:35:30.236835 kernel: raid6: .... 
xor() 10187 MB/s, rmw enabled May 13 00:35:30.237884 kernel: raid6: using neon recovery algorithm May 13 00:35:30.249139 kernel: xor: measuring software checksum speed May 13 00:35:30.249159 kernel: 8regs : 17213 MB/sec May 13 00:35:30.249168 kernel: 32regs : 20707 MB/sec May 13 00:35:30.249799 kernel: arm64_neon : 27729 MB/sec May 13 00:35:30.249809 kernel: xor: using function: arm64_neon (27729 MB/sec) May 13 00:35:30.304717 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 13 00:35:30.315400 systemd[1]: Finished dracut-pre-udev.service. May 13 00:35:30.319918 kernel: audit: type=1130 audit(1747096530.316:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:30.319940 kernel: audit: type=1334 audit(1747096530.316:10): prog-id=7 op=LOAD May 13 00:35:30.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:30.316000 audit: BPF prog-id=7 op=LOAD May 13 00:35:30.318000 audit: BPF prog-id=8 op=LOAD May 13 00:35:30.319490 systemd[1]: Starting systemd-udevd.service... May 13 00:35:30.332576 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 13 00:35:30.336238 systemd[1]: Started systemd-udevd.service. May 13 00:35:30.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:30.337889 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:35:30.352674 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation May 13 00:35:30.380720 systemd[1]: Finished dracut-pre-trigger.service. 
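The raid6 lines above show the kernel timing every available gen() implementation at boot and keeping the fastest one ("using algorithm neonx2 gen() 12962 MB/s"). The selection is simply a maximum over the measured throughputs; a minimal sketch using the numbers from this log:

```shell
# Pick the fastest raid6 gen() implementation, as the kernel does at boot.
# Throughputs below are the MB/s figures logged above.
best=""; best_mbs=0
while read -r name mbs; do
  if [ "$mbs" -gt "$best_mbs" ]; then best="$name"; best_mbs="$mbs"; fi
done <<EOF
neonx8 10551
neonx4 12944
neonx2 12962
neonx1 10476
int64x8 6241
int64x4 7183
int64x2 6142
int64x1 5036
EOF
echo "using algorithm $best gen() $best_mbs MB/s"
```

The same benchmark-then-select pattern appears again just below for the xor checksum functions, where arm64_neon wins at 27729 MB/sec.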
May 13 00:35:30.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:30.382372 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:35:30.415942 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:35:30.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:30.453373 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:35:30.460391 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:35:30.460413 kernel: GPT:9289727 != 19775487 May 13 00:35:30.460423 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:35:30.460433 kernel: GPT:9289727 != 19775487 May 13 00:35:30.460443 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:35:30.460451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:35:30.470718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:35:30.471862 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:35:30.476717 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (550) May 13 00:35:30.480226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:35:30.486231 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:35:30.491789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:35:30.493600 systemd[1]: Starting disk-uuid.service... May 13 00:35:30.499350 disk-uuid[563]: Primary Header is updated. May 13 00:35:30.499350 disk-uuid[563]: Secondary Entries is updated. May 13 00:35:30.499350 disk-uuid[563]: Secondary Header is updated. 
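The GPT warnings above come from a simple consistency check: the primary GPT header records the LBA of the backup (alternate) header, which must be the disk's last sector. Here the disk is 19775488 sectors, so the backup should sit at LBA 19775487, but the header says 9289727, which commonly happens when a disk image is enlarged after partitioning (the log itself suggests GNU Parted to rewrite the backup header at the true end). A sketch of the check, using the values from this log:

```shell
# Values taken from the virtio_blk / GPT messages above.
disk_sectors=19775488                    # [vda] 19775488 512-byte logical blocks
expected_alt_lba=$((disk_sectors - 1))   # backup GPT header belongs in the last sector
recorded_alt_lba=9289727                 # where the primary header says the backup lives
echo "expected=$expected_alt_lba recorded=$recorded_alt_lba"
if [ "$recorded_alt_lba" -ne "$expected_alt_lba" ]; then
  echo "Alternate GPT header not at the end of the disk"
fi
```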
May 13 00:35:30.502714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:35:31.523362 disk-uuid[564]: The operation has completed successfully. May 13 00:35:31.524473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:35:31.552068 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:35:31.552853 systemd[1]: Finished disk-uuid.service. May 13 00:35:31.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.556858 systemd[1]: Starting verity-setup.service... May 13 00:35:31.575728 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 00:35:31.611429 systemd[1]: Found device dev-mapper-usr.device. May 13 00:35:31.614353 systemd[1]: Mounting sysusr-usr.mount... May 13 00:35:31.616300 systemd[1]: Finished verity-setup.service. May 13 00:35:31.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.693719 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:35:31.693741 systemd[1]: Mounted sysusr-usr.mount. May 13 00:35:31.694550 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:35:31.695313 systemd[1]: Starting ignition-setup.service... May 13 00:35:31.697649 systemd[1]: Starting parse-ip-for-networkd.service... 
May 13 00:35:31.708142 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:35:31.708223 kernel: BTRFS info (device vda6): using free space tree May 13 00:35:31.708271 kernel: BTRFS info (device vda6): has skinny extents May 13 00:35:31.716049 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:35:31.726332 systemd[1]: Finished ignition-setup.service. May 13 00:35:31.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.727993 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:35:31.787093 systemd[1]: Finished parse-ip-for-networkd.service. May 13 00:35:31.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.787000 audit: BPF prog-id=9 op=LOAD May 13 00:35:31.789227 systemd[1]: Starting systemd-networkd.service... May 13 00:35:31.811415 systemd-networkd[741]: lo: Link UP May 13 00:35:31.811428 systemd-networkd[741]: lo: Gained carrier May 13 00:35:31.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.811969 systemd-networkd[741]: Enumeration completed May 13 00:35:31.812072 systemd[1]: Started systemd-networkd.service. May 13 00:35:31.812154 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:35:31.813290 systemd-networkd[741]: eth0: Link UP May 13 00:35:31.813294 systemd-networkd[741]: eth0: Gained carrier May 13 00:35:31.814863 systemd[1]: Reached target network.target. May 13 00:35:31.816836 systemd[1]: Starting iscsiuio.service... 
May 13 00:35:31.826996 systemd[1]: Started iscsiuio.service. May 13 00:35:31.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.828783 systemd[1]: Starting iscsid.service... May 13 00:35:31.832813 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:35:31.832813 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 00:35:31.832813 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:35:31.832813 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:35:31.832813 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:35:31.832813 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:35:31.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.834179 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:35:31.836350 systemd[1]: Started iscsid.service. May 13 00:35:31.841364 systemd[1]: Starting dracut-initqueue.service... May 13 00:35:31.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:31.852503 systemd[1]: Finished dracut-initqueue.service. May 13 00:35:31.856617 ignition[660]: Ignition 2.14.0 May 13 00:35:31.853495 systemd[1]: Reached target remote-fs-pre.target. May 13 00:35:31.856624 ignition[660]: Stage: fetch-offline May 13 00:35:31.854334 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:35:31.856667 ignition[660]: no configs at "/usr/lib/ignition/base.d" May 13 00:35:31.855165 systemd[1]: Reached target remote-fs.target. May 13 00:35:31.856676 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:31.856968 systemd[1]: Starting dracut-pre-mount.service... May 13 00:35:31.856873 ignition[660]: parsed url from cmdline: "" May 13 00:35:31.856877 ignition[660]: no config URL provided May 13 00:35:31.856881 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:35:31.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.865134 systemd[1]: Finished dracut-pre-mount.service. 
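The iscsid warning earlier asks for an InitiatorName file when software iSCSI is in use. A minimal sketch of creating one (the IQN value is hypothetical; a real deployment derives it from a reversed domain name you control, and the file lives at /etc/iscsi/initiatorname.iscsi):

```shell
# Hypothetical IQN; format is iqn.yyyy-mm.<reversed domain name>[:identifier].
iqn="iqn.2001-04.com.example:node1"
# Written to the working directory here; the real path is /etc/iscsi/initiatorname.iscsi.
printf 'InitiatorName=%s\n' "$iqn" > initiatorname.iscsi
cat initiatorname.iscsi
```

With only hardware iSCSI (e.g. the qla4xxx HBA driver loaded above), the log notes the warning can be ignored.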
May 13 00:35:31.856889 ignition[660]: no config at "/usr/lib/ignition/user.ign" May 13 00:35:31.856908 ignition[660]: op(1): [started] loading QEMU firmware config module May 13 00:35:31.856913 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:35:31.866001 ignition[660]: op(1): [finished] loading QEMU firmware config module May 13 00:35:31.905668 ignition[660]: parsing config with SHA512: e42cfd412419ec442e4ba63a9d25e8d62bfcbec9890c5f64f6eeace64f6aab5fb23857f21e7e5c6b691308f8a892716576de06489790af7a9a6fff2108e86e68 May 13 00:35:31.912888 unknown[660]: fetched base config from "system" May 13 00:35:31.912898 unknown[660]: fetched user config from "qemu" May 13 00:35:31.913433 ignition[660]: fetch-offline: fetch-offline passed May 13 00:35:31.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.914564 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:35:31.913490 ignition[660]: Ignition finished successfully May 13 00:35:31.916071 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:35:31.916883 systemd[1]: Starting ignition-kargs.service... May 13 00:35:31.925603 ignition[763]: Ignition 2.14.0 May 13 00:35:31.925612 ignition[763]: Stage: kargs May 13 00:35:31.925725 ignition[763]: no configs at "/usr/lib/ignition/base.d" May 13 00:35:31.927818 systemd[1]: Finished ignition-kargs.service. May 13 00:35:31.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:31.925735 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:31.926634 ignition[763]: kargs: kargs passed May 13 00:35:31.930124 systemd[1]: Starting ignition-disks.service... May 13 00:35:31.926676 ignition[763]: Ignition finished successfully May 13 00:35:31.936818 ignition[769]: Ignition 2.14.0 May 13 00:35:31.936829 ignition[769]: Stage: disks May 13 00:35:31.936916 ignition[769]: no configs at "/usr/lib/ignition/base.d" May 13 00:35:31.936926 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:31.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.938676 systemd[1]: Finished ignition-disks.service. May 13 00:35:31.937856 ignition[769]: disks: disks passed May 13 00:35:31.939638 systemd[1]: Reached target initrd-root-device.target. May 13 00:35:31.937901 ignition[769]: Ignition finished successfully May 13 00:35:31.941195 systemd[1]: Reached target local-fs-pre.target. May 13 00:35:31.942438 systemd[1]: Reached target local-fs.target. May 13 00:35:31.943605 systemd[1]: Reached target sysinit.target. May 13 00:35:31.944914 systemd[1]: Reached target basic.target. May 13 00:35:31.947039 systemd[1]: Starting systemd-fsck-root.service... May 13 00:35:31.959988 systemd-fsck[777]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 13 00:35:31.963979 systemd[1]: Finished systemd-fsck-root.service. May 13 00:35:31.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:31.965593 systemd[1]: Mounting sysroot.mount... May 13 00:35:31.974708 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
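During the fetch-offline stage above, Ignition logged "parsing config with SHA512: e42cfd41…", a fingerprint of the exact config bytes it parsed. A sketch of producing such a fingerprint with coreutils (the config body here is hypothetical, not the one hashed in this log):

```shell
# Hypothetical minimal Ignition config; the fingerprint is the SHA-512 of its bytes.
printf '{"ignition":{"version":"3.0.0"}}' > config.ign
digest=$(sha512sum config.ign | awk '{print $1}')
echo "$digest"   # 128 hex characters
```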
May 13 00:35:31.975047 systemd[1]: Mounted sysroot.mount. May 13 00:35:31.975848 systemd[1]: Reached target initrd-root-fs.target. May 13 00:35:31.978052 systemd[1]: Mounting sysroot-usr.mount... May 13 00:35:31.978934 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:35:31.978973 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:35:31.978997 systemd[1]: Reached target ignition-diskful.target. May 13 00:35:31.980909 systemd[1]: Mounted sysroot-usr.mount. May 13 00:35:31.982482 systemd[1]: Starting initrd-setup-root.service... May 13 00:35:31.986959 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:35:31.991983 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory May 13 00:35:31.996412 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:35:32.000771 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:35:32.034511 systemd[1]: Finished initrd-setup-root.service. May 13 00:35:32.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:32.036128 systemd[1]: Starting ignition-mount.service... May 13 00:35:32.037418 systemd[1]: Starting sysroot-boot.service... May 13 00:35:32.041776 bash[828]: umount: /sysroot/usr/share/oem: not mounted. 
May 13 00:35:32.049829 ignition[830]: INFO : Ignition 2.14.0 May 13 00:35:32.049829 ignition[830]: INFO : Stage: mount May 13 00:35:32.052120 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:35:32.052120 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:32.052120 ignition[830]: INFO : mount: mount passed May 13 00:35:32.052120 ignition[830]: INFO : Ignition finished successfully May 13 00:35:32.055012 systemd[1]: Finished ignition-mount.service. May 13 00:35:32.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:32.057687 systemd[1]: Finished sysroot-boot.service. May 13 00:35:32.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:32.625196 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 00:35:32.630712 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838) May 13 00:35:32.633180 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:35:32.633203 kernel: BTRFS info (device vda6): using free space tree May 13 00:35:32.633212 kernel: BTRFS info (device vda6): has skinny extents May 13 00:35:32.635828 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:35:32.637351 systemd[1]: Starting ignition-files.service... 
May 13 00:35:32.651053 ignition[858]: INFO : Ignition 2.14.0 May 13 00:35:32.651053 ignition[858]: INFO : Stage: files May 13 00:35:32.652591 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:35:32.652591 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:32.652591 ignition[858]: DEBUG : files: compiled without relabeling support, skipping May 13 00:35:32.657032 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:35:32.657032 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:35:32.664167 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:35:32.665492 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:35:32.665492 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:35:32.665492 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 00:35:32.665492 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 00:35:32.664847 unknown[858]: wrote ssh authorized keys file for user: core May 13 00:35:32.724630 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:35:32.858195 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 00:35:32.860204 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:35:32.860204 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 13 00:35:32.892819 systemd-networkd[741]: eth0: Gained IPv6LL May 13 00:35:33.167839 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:35:33.291853 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:35:33.293517 ignition[858]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:35:33.293517 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 00:35:33.519044 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:35:33.800390 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:35:33.800390 ignition[858]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(e): op(f): 
[finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:35:33.804437 ignition[858]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:35:33.852134 ignition[858]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:35:33.854806 ignition[858]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:35:33.854806 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:35:33.854806 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:35:33.854806 ignition[858]: INFO : files: files passed May 13 00:35:33.854806 ignition[858]: INFO : Ignition finished successfully May 13 00:35:33.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.855225 systemd[1]: Finished ignition-files.service. May 13 00:35:33.861762 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:35:33.863462 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
May 13 00:35:33.865740 systemd[1]: Starting ignition-quench.service... May 13 00:35:33.869201 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:35:33.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.869303 systemd[1]: Finished ignition-quench.service. May 13 00:35:33.871756 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:35:33.873367 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:35:33.874589 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:35:33.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.877192 systemd[1]: Reached target ignition-complete.target. May 13 00:35:33.879452 systemd[1]: Starting initrd-parse-etc.service... May 13 00:35:33.892414 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:35:33.892516 systemd[1]: Finished initrd-parse-etc.service. May 13 00:35:33.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:33.894370 systemd[1]: Reached target initrd-fs.target. May 13 00:35:33.895555 systemd[1]: Reached target initrd.target. May 13 00:35:33.896880 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:35:33.897690 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:35:33.909215 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:35:33.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.910925 systemd[1]: Starting initrd-cleanup.service... May 13 00:35:33.919382 systemd[1]: Stopped target nss-lookup.target. May 13 00:35:33.920279 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:35:33.921768 systemd[1]: Stopped target timers.target. May 13 00:35:33.923212 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:35:33.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.923332 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:35:33.924680 systemd[1]: Stopped target initrd.target. May 13 00:35:33.926073 systemd[1]: Stopped target basic.target. May 13 00:35:33.927803 systemd[1]: Stopped target ignition-complete.target. May 13 00:35:33.930968 systemd[1]: Stopped target ignition-diskful.target. May 13 00:35:33.932356 systemd[1]: Stopped target initrd-root-device.target. May 13 00:35:33.933770 systemd[1]: Stopped target remote-fs.target. May 13 00:35:33.935173 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:35:33.936585 systemd[1]: Stopped target sysinit.target. May 13 00:35:33.937853 systemd[1]: Stopped target local-fs.target. May 13 00:35:33.939165 systemd[1]: Stopped target local-fs-pre.target. 
May 13 00:35:33.940472 systemd[1]: Stopped target swap.target. May 13 00:35:33.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.941668 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:35:33.941806 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:35:33.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.943170 systemd[1]: Stopped target cryptsetup.target. May 13 00:35:33.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.944332 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:35:33.944438 systemd[1]: Stopped dracut-initqueue.service. May 13 00:35:33.946121 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:35:33.946220 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:35:33.947584 systemd[1]: Stopped target paths.target. May 13 00:35:33.948827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:35:33.952765 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:35:33.953673 systemd[1]: Stopped target slices.target. May 13 00:35:33.955247 systemd[1]: Stopped target sockets.target. May 13 00:35:33.956596 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:35:33.956670 systemd[1]: Closed iscsid.socket. May 13 00:35:33.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:33.957757 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:35:33.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.957828 systemd[1]: Closed iscsiuio.socket. May 13 00:35:33.958967 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:35:33.959068 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:35:33.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.960516 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:35:33.960609 systemd[1]: Stopped ignition-files.service. May 13 00:35:33.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.969584 ignition[898]: INFO : Ignition 2.14.0 May 13 00:35:33.969584 ignition[898]: INFO : Stage: umount May 13 00:35:33.969584 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:35:33.969584 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:35:33.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.962535 systemd[1]: Stopping ignition-mount.service... 
May 13 00:35:33.975550 ignition[898]: INFO : umount: umount passed May 13 00:35:33.975550 ignition[898]: INFO : Ignition finished successfully May 13 00:35:33.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.963500 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:35:33.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.963626 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:35:33.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.966130 systemd[1]: Stopping sysroot-boot.service... May 13 00:35:33.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.967067 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:35:33.967202 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:35:33.968994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:35:33.969082 systemd[1]: Stopped dracut-pre-trigger.service. 
May 13 00:35:33.972931 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:35:34.002832 kernel: kauditd_printk_skb: 44 callbacks suppressed May 13 00:35:34.002857 kernel: audit: type=1131 audit(1747096533.995:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.973022 systemd[1]: Finished initrd-cleanup.service. May 13 00:35:33.975116 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:35:34.009166 kernel: audit: type=1131 audit(1747096534.002:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.009189 kernel: audit: type=1334 audit(1747096534.004:57): prog-id=6 op=UNLOAD May 13 00:35:34.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.004000 audit: BPF prog-id=6 op=UNLOAD May 13 00:35:33.975197 systemd[1]: Stopped ignition-mount.service. May 13 00:35:34.015800 kernel: audit: type=1131 audit(1747096534.010:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.977319 systemd[1]: Stopped target network.target. 
May 13 00:35:34.021140 kernel: audit: type=1131 audit(1747096534.015:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.978220 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:35:33.978279 systemd[1]: Stopped ignition-disks.service. May 13 00:35:33.980386 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:35:34.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.980427 systemd[1]: Stopped ignition-kargs.service. May 13 00:35:33.982349 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:35:34.032996 kernel: audit: type=1131 audit(1747096534.021:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.982388 systemd[1]: Stopped ignition-setup.service. May 13 00:35:34.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.983960 systemd[1]: Stopping systemd-networkd.service... May 13 00:35:34.043039 kernel: audit: type=1131 audit(1747096534.035:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.987238 systemd[1]: Stopping systemd-resolved.service... 
May 13 00:35:34.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.990062 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:35:34.051168 kernel: audit: type=1131 audit(1747096534.044:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.991980 systemd-networkd[741]: eth0: DHCPv6 lease lost May 13 00:35:34.051000 audit: BPF prog-id=9 op=UNLOAD May 13 00:35:33.994056 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:35:34.057647 kernel: audit: type=1334 audit(1747096534.051:63): prog-id=9 op=UNLOAD May 13 00:35:34.057671 kernel: audit: type=1131 audit(1747096534.054:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.994145 systemd[1]: Stopped systemd-networkd.service. May 13 00:35:34.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.997002 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:35:34.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:33.997091 systemd[1]: Stopped systemd-resolved.service. 
May 13 00:35:34.004093 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:35:34.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.004123 systemd[1]: Closed systemd-networkd.socket. May 13 00:35:34.009129 systemd[1]: Stopping network-cleanup.service... May 13 00:35:34.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.009956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:35:34.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.010011 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:35:34.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:34.010902 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:35:34.010944 systemd[1]: Stopped systemd-sysctl.service. May 13 00:35:34.020105 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:35:34.020153 systemd[1]: Stopped systemd-modules-load.service. May 13 00:35:34.022102 systemd[1]: Stopping systemd-udevd.service... May 13 00:35:34.026549 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 13 00:35:34.031977 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:35:34.032092 systemd[1]: Stopped network-cleanup.service. May 13 00:35:34.042709 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:35:34.042842 systemd[1]: Stopped systemd-udevd.service. May 13 00:35:34.046112 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:35:34.046152 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:35:34.050332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:35:34.050374 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:35:34.052984 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:35:34.053034 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:35:34.054615 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:35:34.054657 systemd[1]: Stopped dracut-cmdline.service. May 13 00:35:34.058491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:35:34.058535 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:35:34.062073 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:35:34.062845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:35:34.062899 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:35:34.065185 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:35:34.065661 systemd[1]: Stopped sysroot-boot.service. May 13 00:35:34.096222 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). May 13 00:35:34.096255 iscsid[747]: iscsid shutting down. May 13 00:35:34.066803 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:35:34.066843 systemd[1]: Stopped initrd-setup-root.service. May 13 00:35:34.068329 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:35:34.068408 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
May 13 00:35:34.069998 systemd[1]: Reached target initrd-switch-root.target. May 13 00:35:34.072051 systemd[1]: Starting initrd-switch-root.service... May 13 00:35:34.078359 systemd[1]: Switching root. May 13 00:35:34.102709 systemd-journald[291]: Journal stopped May 13 00:35:36.255581 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:35:36.255635 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:35:36.255647 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:35:36.255657 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:35:36.255672 kernel: SELinux: policy capability open_perms=1 May 13 00:35:36.255682 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:35:36.255712 kernel: SELinux: policy capability always_check_network=0 May 13 00:35:36.255726 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:35:36.255737 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:35:36.255751 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:35:36.255762 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:35:36.255777 systemd[1]: Successfully loaded SELinux policy in 34.676ms. May 13 00:35:36.255801 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.362ms. May 13 00:35:36.255813 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:35:36.255824 systemd[1]: Detected virtualization kvm. May 13 00:35:36.255837 systemd[1]: Detected architecture arm64. May 13 00:35:36.255848 systemd[1]: Detected first boot. May 13 00:35:36.255860 systemd[1]: Initializing machine ID from VM UUID. 
May 13 00:35:36.255871 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:35:36.255882 systemd[1]: Populated /etc with preset unit settings. May 13 00:35:36.255894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:35:36.255906 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:35:36.255918 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:35:36.255930 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:35:36.255944 systemd[1]: Stopped iscsiuio.service. May 13 00:35:36.255955 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:35:36.255967 systemd[1]: Stopped iscsid.service. May 13 00:35:36.255978 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:35:36.255989 systemd[1]: Stopped initrd-switch-root.service. May 13 00:35:36.256002 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:35:36.256014 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:35:36.256024 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:35:36.256034 systemd[1]: Created slice system-getty.slice. May 13 00:35:36.256045 systemd[1]: Created slice system-modprobe.slice. May 13 00:35:36.256056 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:35:36.256066 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:35:36.256077 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:35:36.256087 systemd[1]: Created slice user.slice. 
May 13 00:35:36.256098 systemd[1]: Started systemd-ask-password-console.path. May 13 00:35:36.256109 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:35:36.256120 systemd[1]: Set up automount boot.automount. May 13 00:35:36.256130 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:35:36.256141 systemd[1]: Stopped target initrd-switch-root.target. May 13 00:35:36.256152 systemd[1]: Stopped target initrd-fs.target. May 13 00:35:36.256163 systemd[1]: Stopped target initrd-root-fs.target. May 13 00:35:36.256175 systemd[1]: Reached target integritysetup.target. May 13 00:35:36.256185 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:35:36.256196 systemd[1]: Reached target remote-fs.target. May 13 00:35:36.256207 systemd[1]: Reached target slices.target. May 13 00:35:36.256217 systemd[1]: Reached target swap.target. May 13 00:35:36.256227 systemd[1]: Reached target torcx.target. May 13 00:35:36.256238 systemd[1]: Reached target veritysetup.target. May 13 00:35:36.256248 systemd[1]: Listening on systemd-coredump.socket. May 13 00:35:36.256264 systemd[1]: Listening on systemd-initctl.socket. May 13 00:35:36.256276 systemd[1]: Listening on systemd-networkd.socket. May 13 00:35:36.256287 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:35:36.256298 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:35:36.256309 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:35:36.256320 systemd[1]: Mounting dev-hugepages.mount... May 13 00:35:36.256330 systemd[1]: Mounting dev-mqueue.mount... May 13 00:35:36.256341 systemd[1]: Mounting media.mount... May 13 00:35:36.256351 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:35:36.256362 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:35:36.256373 systemd[1]: Mounting tmp.mount... May 13 00:35:36.256385 systemd[1]: Starting flatcar-tmpfiles.service... 
May 13 00:35:36.256396 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:36.256406 systemd[1]: Starting kmod-static-nodes.service... May 13 00:35:36.256416 systemd[1]: Starting modprobe@configfs.service... May 13 00:35:36.256427 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:36.256439 systemd[1]: Starting modprobe@drm.service... May 13 00:35:36.256449 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:36.256460 systemd[1]: Starting modprobe@fuse.service... May 13 00:35:36.256470 systemd[1]: Starting modprobe@loop.service... May 13 00:35:36.256482 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:35:36.256493 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:35:36.256503 systemd[1]: Stopped systemd-fsck-root.service. May 13 00:35:36.256513 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:35:36.256524 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:35:36.256534 systemd[1]: Stopped systemd-journald.service. May 13 00:35:36.256544 kernel: fuse: init (API version 7.34) May 13 00:35:36.256554 systemd[1]: Starting systemd-journald.service... May 13 00:35:36.256565 systemd[1]: Starting systemd-modules-load.service... May 13 00:35:36.256577 systemd[1]: Starting systemd-network-generator.service... May 13 00:35:36.256587 kernel: loop: module loaded May 13 00:35:36.256610 systemd[1]: Starting systemd-remount-fs.service... May 13 00:35:36.256621 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:35:36.256632 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:35:36.256643 systemd[1]: Stopped verity-setup.service. May 13 00:35:36.256653 systemd[1]: Mounted dev-hugepages.mount. May 13 00:35:36.256663 systemd[1]: Mounted dev-mqueue.mount. May 13 00:35:36.256673 systemd[1]: Mounted media.mount. 
May 13 00:35:36.256684 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:35:36.256703 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:35:36.256714 systemd[1]: Mounted tmp.mount. May 13 00:35:36.256728 systemd[1]: Finished kmod-static-nodes.service. May 13 00:35:36.256739 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:35:36.256749 systemd[1]: Finished modprobe@configfs.service. May 13 00:35:36.256761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:36.256771 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:36.256782 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:35:36.256792 systemd[1]: Finished modprobe@drm.service. May 13 00:35:36.256802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:36.256813 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:36.256824 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:35:36.256834 systemd[1]: Finished modprobe@fuse.service. May 13 00:35:36.256861 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:36.256872 systemd[1]: Finished modprobe@loop.service. May 13 00:35:36.256883 systemd[1]: Finished systemd-modules-load.service. May 13 00:35:36.256896 systemd-journald[993]: Journal started May 13 00:35:36.256939 systemd-journald[993]: Runtime Journal (/run/log/journal/0eeb2cbfcbf344a984b439dd86e53781) is 6.0M, max 48.7M, 42.6M free. May 13 00:35:36.256970 systemd[1]: Finished systemd-network-generator.service. 
May 13 00:35:34.164000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:35:34.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:35:34.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:35:34.276000 audit: BPF prog-id=10 op=LOAD May 13 00:35:34.276000 audit: BPF prog-id=10 op=UNLOAD May 13 00:35:34.276000 audit: BPF prog-id=11 op=LOAD May 13 00:35:34.276000 audit: BPF prog-id=11 op=UNLOAD May 13 00:35:34.328000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:35:34.328000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:34.328000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:35:34.329000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:35:34.329000 audit[931]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:34.329000 audit: CWD cwd="/" May 13 00:35:34.329000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:35:34.329000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:35:34.329000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:35:36.075000 audit: BPF prog-id=12 op=LOAD May 13 00:35:36.075000 audit: BPF prog-id=3 op=UNLOAD May 13 00:35:36.075000 audit: BPF prog-id=13 op=LOAD May 13 00:35:36.075000 audit: BPF prog-id=14 op=LOAD May 13 00:35:36.075000 audit: BPF prog-id=4 op=UNLOAD May 13 00:35:36.075000 audit: BPF prog-id=5 op=UNLOAD May 13 00:35:36.076000 audit: BPF prog-id=15 op=LOAD May 13 00:35:36.076000 audit: BPF prog-id=12 op=UNLOAD May 13 00:35:36.076000 audit: BPF prog-id=16 op=LOAD May 13 00:35:36.076000 audit: BPF prog-id=17 op=LOAD May 13 00:35:36.076000 audit: BPF prog-id=13 op=UNLOAD May 13 00:35:36.077000 audit: BPF prog-id=14 op=UNLOAD May 13 00:35:36.078000 audit: BPF prog-id=18 op=LOAD May 13 00:35:36.078000 audit: BPF prog-id=15 op=UNLOAD May 13 00:35:36.078000 audit: BPF prog-id=19 op=LOAD May 13 00:35:36.078000 audit: BPF prog-id=20 op=LOAD May 13 00:35:36.078000 audit: BPF 
prog-id=16 op=UNLOAD May 13 00:35:36.078000 audit: BPF prog-id=17 op=UNLOAD May 13 00:35:36.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.095000 audit: BPF prog-id=18 op=UNLOAD May 13 00:35:36.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:36.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.200000 audit: BPF prog-id=21 op=LOAD May 13 00:35:36.200000 audit: BPF prog-id=22 op=LOAD May 13 00:35:36.201000 audit: BPF prog-id=23 op=LOAD May 13 00:35:36.201000 audit: BPF prog-id=19 op=UNLOAD May 13 00:35:36.201000 audit: BPF prog-id=20 op=UNLOAD May 13 00:35:36.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:36.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:36.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.253000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:35:36.253000 audit[993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffcbd477b0 a2=4000 a3=1 items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:35:36.253000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:35:36.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.074941 systemd[1]: Queued start job for default target multi-user.target. May 13 00:35:34.326308 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:35:36.074953 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:35:36.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:34.326613 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:35:36.079228 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:35:34.326633 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:35:34.326664 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:35:34.326674 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:35:34.326723 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:35:34.326736 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:35:34.326936 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:35:34.326972 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:35:34.326984 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:35:34.328014 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug 
msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:35:34.328053 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 00:35:34.328073 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:35:34.328087 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:35:34.328105 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:35:34.328123 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:35:35.815727 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:35.816031 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 
00:35:35.816132 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:35.819991 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:35:35.820059 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 00:35:35.820121 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-13T00:35:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:35:36.260257 systemd[1]: Started systemd-journald.service. May 13 00:35:36.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.261152 systemd[1]: Finished systemd-remount-fs.service. May 13 00:35:36.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.262518 systemd[1]: Reached target network-pre.target. 
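The torcx generator entries above end with the system state being sealed as shell-style `KEY="value"` pairs into `/run/metadata/torcx`. A minimal Python sketch of parsing such a file — using a sample string copied from the log's `content=` field rather than reading the real path, which only exists on a booted Flatcar host:

```python
# Sample reproduces the sealed state shown in the log above.
sample = (
    'TORCX_LOWER_PROFILES="vendor"\n'
    'TORCX_UPPER_PROFILE=""\n'
    'TORCX_PROFILE_PATH="/run/torcx/profile.json"\n'
    'TORCX_BINDIR="/run/torcx/bin"\n'
    'TORCX_UNPACKDIR="/run/torcx/unpack"\n'
)

def parse_env_file(text: str) -> dict:
    """Parse KEY="value" lines into a dict, stripping the quotes."""
    env = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            env[key] = value.strip('"')
    return env

state = parse_env_file(sample)
print(state["TORCX_PROFILE_PATH"])  # /run/torcx/profile.json
```

On a live system the same parsing would apply to the actual `/run/metadata/torcx` file contents.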
May 13 00:35:36.264949 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:35:36.266986 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:35:36.267789 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:35:36.270711 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:35:36.272583 systemd[1]: Starting systemd-journal-flush.service... May 13 00:35:36.273482 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:35:36.274705 systemd[1]: Starting systemd-random-seed.service... May 13 00:35:36.275572 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:35:36.278925 systemd[1]: Starting systemd-sysctl.service... May 13 00:35:36.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.281520 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:35:36.282758 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:35:36.283277 systemd-journald[993]: Time spent on flushing to /var/log/journal/0eeb2cbfcbf344a984b439dd86e53781 is 16.767ms for 1001 entries. May 13 00:35:36.283277 systemd-journald[993]: System Journal (/var/log/journal/0eeb2cbfcbf344a984b439dd86e53781) is 8.0M, max 195.6M, 187.6M free. May 13 00:35:36.311010 systemd-journald[993]: Received client request to flush runtime journal. May 13 00:35:36.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:36.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.285896 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:35:36.287000 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:35:36.314082 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:35:36.289625 systemd[1]: Starting systemd-sysusers.service... May 13 00:35:36.291688 systemd[1]: Starting systemd-udev-settle.service... May 13 00:35:36.294015 systemd[1]: Finished systemd-random-seed.service. May 13 00:35:36.295007 systemd[1]: Reached target first-boot-complete.target. May 13 00:35:36.296195 systemd[1]: Finished systemd-sysctl.service. May 13 00:35:36.313815 systemd[1]: Finished systemd-journal-flush.service. May 13 00:35:36.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.317509 systemd[1]: Finished systemd-sysusers.service. May 13 00:35:36.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.653710 systemd[1]: Finished systemd-hwdb-update.service. 
May 13 00:35:36.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.654000 audit: BPF prog-id=24 op=LOAD May 13 00:35:36.654000 audit: BPF prog-id=25 op=LOAD May 13 00:35:36.654000 audit: BPF prog-id=7 op=UNLOAD May 13 00:35:36.654000 audit: BPF prog-id=8 op=UNLOAD May 13 00:35:36.656067 systemd[1]: Starting systemd-udevd.service... May 13 00:35:36.673759 systemd-udevd[1035]: Using default interface naming scheme 'v252'. May 13 00:35:36.690066 systemd[1]: Started systemd-udevd.service. May 13 00:35:36.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.691000 audit: BPF prog-id=26 op=LOAD May 13 00:35:36.693099 systemd[1]: Starting systemd-networkd.service... May 13 00:35:36.699000 audit: BPF prog-id=27 op=LOAD May 13 00:35:36.699000 audit: BPF prog-id=28 op=LOAD May 13 00:35:36.699000 audit: BPF prog-id=29 op=LOAD May 13 00:35:36.702610 systemd[1]: Starting systemd-userdbd.service... May 13 00:35:36.710156 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 13 00:35:36.736372 systemd[1]: Started systemd-userdbd.service. May 13 00:35:36.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.767886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:35:36.802435 systemd[1]: Finished systemd-udev-settle.service. 
May 13 00:35:36.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.805450 systemd[1]: Starting lvm2-activation-early.service... May 13 00:35:36.807469 systemd-networkd[1043]: lo: Link UP May 13 00:35:36.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.807479 systemd-networkd[1043]: lo: Gained carrier May 13 00:35:36.807838 systemd-networkd[1043]: Enumeration completed May 13 00:35:36.807976 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:35:36.808048 systemd[1]: Started systemd-networkd.service. May 13 00:35:36.815367 systemd-networkd[1043]: eth0: Link UP May 13 00:35:36.815375 systemd-networkd[1043]: eth0: Gained carrier May 13 00:35:36.818634 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:35:36.845875 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:35:36.850668 systemd[1]: Finished lvm2-activation-early.service. May 13 00:35:36.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.851754 systemd[1]: Reached target cryptsetup.target. May 13 00:35:36.853859 systemd[1]: Starting lvm2-activation.service... May 13 00:35:36.857152 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:35:36.892622 systemd[1]: Finished lvm2-activation.service. 
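The networkd entries above record a DHCPv4 lease of `10.0.0.111/16` with gateway `10.0.0.1`. As a small illustration of what that prefix length implies, the lease can be inspected with Python's standard `ipaddress` module (values taken directly from the log line):

```python
import ipaddress

# Address and prefix exactly as reported by systemd-networkd above.
iface = ipaddress.ip_interface("10.0.0.111/16")
print(iface.network)        # 10.0.0.0/16
print(iface.netmask)        # 255.255.0.0

# The gateway from the log sits inside the leased network.
gateway = ipaddress.ip_address("10.0.0.1")
print(gateway in iface.network)  # True
```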
May 13 00:35:36.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.893683 systemd[1]: Reached target local-fs-pre.target. May 13 00:35:36.894545 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:35:36.894579 systemd[1]: Reached target local-fs.target. May 13 00:35:36.895450 systemd[1]: Reached target machines.target. May 13 00:35:36.897572 systemd[1]: Starting ldconfig.service... May 13 00:35:36.898759 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:36.898827 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:36.899988 systemd[1]: Starting systemd-boot-update.service... May 13 00:35:36.901780 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:35:36.903850 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:35:36.905879 systemd[1]: Starting systemd-sysext.service... May 13 00:35:36.907112 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) May 13 00:35:36.908214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:35:36.922179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:35:36.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.928782 systemd[1]: Unmounting usr-share-oem.mount... 
May 13 00:35:36.935253 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:35:36.935458 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:35:36.946807 kernel: loop0: detected capacity change from 0 to 194096 May 13 00:35:36.990117 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:35:36.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:36.996818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:35:37.012999 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) May 13 00:35:37.012999 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters May 13 00:35:37.015657 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:35:37.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.022817 kernel: loop1: detected capacity change from 0 to 194096 May 13 00:35:37.030161 (sd-sysext)[1083]: Using extensions 'kubernetes'. May 13 00:35:37.030531 (sd-sysext)[1083]: Merged extensions into '/usr'. May 13 00:35:37.046336 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:37.047639 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:37.049756 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:37.051767 systemd[1]: Starting modprobe@loop.service... May 13 00:35:37.052756 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 13 00:35:37.052891 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:37.053658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:37.053804 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:37.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.055170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:37.055297 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:37.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.056783 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:37.056915 systemd[1]: Finished modprobe@loop.service. May 13 00:35:37.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:37.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.058353 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:35:37.058480 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:35:37.138909 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:35:37.143038 systemd[1]: Finished ldconfig.service. May 13 00:35:37.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.222382 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:35:37.224091 systemd[1]: Mounting boot.mount... May 13 00:35:37.225884 systemd[1]: Mounting usr-share-oem.mount... May 13 00:35:37.230328 systemd[1]: Mounted usr-share-oem.mount. May 13 00:35:37.232756 systemd[1]: Finished systemd-sysext.service. May 13 00:35:37.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.233640 systemd[1]: Mounted boot.mount. May 13 00:35:37.236085 systemd[1]: Starting ensure-sysext.service... May 13 00:35:37.237927 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:35:37.242477 systemd[1]: Reloading. May 13 00:35:37.253888 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
May 13 00:35:37.257182 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:35:37.261175 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:35:37.277891 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-13T00:35:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:35:37.277920 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-13T00:35:37Z" level=info msg="torcx already run" May 13 00:35:37.339174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:35:37.339196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:35:37.354296 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 00:35:37.397000 audit: BPF prog-id=30 op=LOAD May 13 00:35:37.397000 audit: BPF prog-id=26 op=UNLOAD May 13 00:35:37.397000 audit: BPF prog-id=31 op=LOAD May 13 00:35:37.397000 audit: BPF prog-id=32 op=LOAD May 13 00:35:37.397000 audit: BPF prog-id=24 op=UNLOAD May 13 00:35:37.398000 audit: BPF prog-id=25 op=UNLOAD May 13 00:35:37.399000 audit: BPF prog-id=33 op=LOAD May 13 00:35:37.399000 audit: BPF prog-id=27 op=UNLOAD May 13 00:35:37.399000 audit: BPF prog-id=34 op=LOAD May 13 00:35:37.399000 audit: BPF prog-id=35 op=LOAD May 13 00:35:37.399000 audit: BPF prog-id=28 op=UNLOAD May 13 00:35:37.399000 audit: BPF prog-id=29 op=UNLOAD May 13 00:35:37.400000 audit: BPF prog-id=36 op=LOAD May 13 00:35:37.400000 audit: BPF prog-id=21 op=UNLOAD May 13 00:35:37.400000 audit: BPF prog-id=37 op=LOAD May 13 00:35:37.400000 audit: BPF prog-id=38 op=LOAD May 13 00:35:37.400000 audit: BPF prog-id=22 op=UNLOAD May 13 00:35:37.400000 audit: BPF prog-id=23 op=UNLOAD May 13 00:35:37.403575 systemd[1]: Finished systemd-boot-update.service. May 13 00:35:37.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.405838 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:35:37.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.409668 systemd[1]: Starting audit-rules.service... May 13 00:35:37.411810 systemd[1]: Starting clean-ca-certificates.service... May 13 00:35:37.413840 systemd[1]: Starting systemd-journal-catalog-update.service... 
May 13 00:35:37.417000 audit: BPF prog-id=39 op=LOAD May 13 00:35:37.421000 audit: BPF prog-id=40 op=LOAD May 13 00:35:37.419373 systemd[1]: Starting systemd-resolved.service... May 13 00:35:37.423104 systemd[1]: Starting systemd-timesyncd.service... May 13 00:35:37.425442 systemd[1]: Starting systemd-update-utmp.service... May 13 00:35:37.429923 systemd[1]: Finished clean-ca-certificates.service. May 13 00:35:37.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.430000 audit[1155]: SYSTEM_BOOT pid=1155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:35:37.431335 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:37.432728 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:37.434662 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:37.436587 systemd[1]: Starting modprobe@loop.service... May 13 00:35:37.438119 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:37.438295 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:37.438415 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:35:37.439314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:37.439452 systemd[1]: Finished modprobe@dm_mod.service. 
May 13 00:35:37.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.440655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:37.440789 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:37.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.442013 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:37.442125 systemd[1]: Finished modprobe@loop.service. May 13 00:35:37.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.443376 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 13 00:35:37.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.448076 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:35:37.449280 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:35:37.451071 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:35:37.452950 systemd[1]: Starting modprobe@loop.service... May 13 00:35:37.453681 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:35:37.453833 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:35:37.455179 systemd[1]: Starting systemd-update-done.service... May 13 00:35:37.456373 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:35:37.457352 systemd[1]: Finished systemd-update-utmp.service. May 13 00:35:37.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.458613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:35:37.458754 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:35:37.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:35:37.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.459943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:35:37.460304 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:35:37.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.461594 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:35:37.461767 systemd[1]: Finished modprobe@loop.service. May 13 00:35:37.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.462956 systemd[1]: Finished systemd-update-done.service. May 13 00:35:37.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:35:37.467284 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 13 00:35:37.469905 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:35:37.471883 systemd[1]: Starting modprobe@drm.service...
May 13 00:35:37.474539 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:35:37.476834 systemd[1]: Starting modprobe@loop.service...
May 13 00:35:37.477655 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:35:37.477804 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:35:37.480116 systemd[1]: Starting systemd-networkd-wait-online.service...
May 13 00:35:37.481817 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:35:37.482871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:35:37.483001 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:35:37.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.484515 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:35:37.484637 systemd[1]: Finished modprobe@drm.service.
May 13 00:35:37.485867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:35:37.486001 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:35:37.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.489535 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:35:37.489679 systemd[1]: Finished modprobe@loop.service.
May 13 00:35:37.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.491174 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:35:37.491278 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:35:37.492327 systemd[1]: Finished ensure-sysext.service.
May 13 00:35:37.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.495146 systemd[1]: Started systemd-timesyncd.service.
May 13 00:35:37.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:35:37.496633 systemd[1]: Reached target time-set.target.
May 13 00:35:37.498101 systemd-timesyncd[1154]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:35:37.498395 systemd-timesyncd[1154]: Initial clock synchronization to Tue 2025-05-13 00:35:37.237353 UTC.
May 13 00:35:37.498000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 13 00:35:37.498000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6ba5700 a2=420 a3=0 items=0 ppid=1149 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:35:37.499805 augenrules[1181]: No rules
May 13 00:35:37.498000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 13 00:35:37.500884 systemd[1]: Finished audit-rules.service.
May 13 00:35:37.501417 systemd-resolved[1153]: Positive Trust Anchors:
May 13 00:35:37.501429 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:35:37.501455 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:35:37.517232 systemd-resolved[1153]: Defaulting to hostname 'linux'.
May 13 00:35:37.518633 systemd[1]: Started systemd-resolved.service.
May 13 00:35:37.519567 systemd[1]: Reached target network.target.
May 13 00:35:37.520375 systemd[1]: Reached target nss-lookup.target.
May 13 00:35:37.521217 systemd[1]: Reached target sysinit.target.
May 13 00:35:37.522075 systemd[1]: Started motdgen.path.
May 13 00:35:37.522869 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 13 00:35:37.524091 systemd[1]: Started logrotate.timer.
May 13 00:35:37.524935 systemd[1]: Started mdadm.timer.
May 13 00:35:37.525595 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 13 00:35:37.526408 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:35:37.526442 systemd[1]: Reached target paths.target.
May 13 00:35:37.527159 systemd[1]: Reached target timers.target.
May 13 00:35:37.528244 systemd[1]: Listening on dbus.socket.
May 13 00:35:37.530084 systemd[1]: Starting docker.socket...
May 13 00:35:37.533336 systemd[1]: Listening on sshd.socket.
May 13 00:35:37.534311 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:35:37.534823 systemd[1]: Listening on docker.socket.
May 13 00:35:37.535764 systemd[1]: Reached target sockets.target.
May 13 00:35:37.536542 systemd[1]: Reached target basic.target.
May 13 00:35:37.537348 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:35:37.537378 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:35:37.538380 systemd[1]: Starting containerd.service...
May 13 00:35:37.540211 systemd[1]: Starting dbus.service...
May 13 00:35:37.541955 systemd[1]: Starting enable-oem-cloudinit.service...
May 13 00:35:37.543914 systemd[1]: Starting extend-filesystems.service...
May 13 00:35:37.544862 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 13 00:35:37.546122 systemd[1]: Starting motdgen.service...
May 13 00:35:37.547880 systemd[1]: Starting prepare-helm.service...
May 13 00:35:37.549822 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 13 00:35:37.551736 jq[1191]: false
May 13 00:35:37.551921 systemd[1]: Starting sshd-keygen.service...
May 13 00:35:37.556488 systemd[1]: Starting systemd-logind.service...
May 13 00:35:37.558043 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:35:37.558112 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:35:37.558536 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:35:37.559262 systemd[1]: Starting update-engine.service...
May 13 00:35:37.562777 extend-filesystems[1192]: Found loop1
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda1
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda2
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda3
May 13 00:35:37.562777 extend-filesystems[1192]: Found usr
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda4
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda6
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda7
May 13 00:35:37.562777 extend-filesystems[1192]: Found vda9
May 13 00:35:37.562777 extend-filesystems[1192]: Checking size of /dev/vda9
May 13 00:35:37.623107 extend-filesystems[1192]: Resized partition /dev/vda9
May 13 00:35:37.609105 dbus-daemon[1190]: [system] SELinux support is enabled
May 13 00:35:37.562838 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 13 00:35:37.624581 extend-filesystems[1227]: resize2fs 1.46.5 (30-Dec-2021)
May 13 00:35:37.565537 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:35:37.628412 jq[1209]: true
May 13 00:35:37.565760 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 13 00:35:37.567014 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:35:37.628862 tar[1212]: linux-arm64/helm
May 13 00:35:37.567345 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 13 00:35:37.629126 jq[1215]: true
May 13 00:35:37.604880 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:35:37.605048 systemd[1]: Finished motdgen.service.
May 13 00:35:37.609463 systemd[1]: Started dbus.service.
May 13 00:35:37.613398 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:35:37.613421 systemd[1]: Reached target system-config.target.
May 13 00:35:37.614463 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:35:37.614478 systemd[1]: Reached target user-config.target.
May 13 00:35:37.633795 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:35:37.639783 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 00:35:37.639970 systemd-logind[1203]: New seat seat0.
May 13 00:35:37.643961 systemd[1]: Started systemd-logind.service.
May 13 00:35:37.705593 update_engine[1204]: I0513 00:35:37.701535 1204 main.cc:92] Flatcar Update Engine starting
May 13 00:35:37.705886 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:35:37.709916 systemd[1]: Started update-engine.service.
May 13 00:35:37.713389 systemd[1]: Started locksmithd.service.
May 13 00:35:37.724903 update_engine[1204]: I0513 00:35:37.716913 1204 update_check_scheduler.cc:74] Next update check in 2m7s
May 13 00:35:37.724939 extend-filesystems[1227]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:35:37.724939 extend-filesystems[1227]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:35:37.724939 extend-filesystems[1227]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:35:37.725923 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:35:37.730008 extend-filesystems[1192]: Resized filesystem in /dev/vda9
May 13 00:35:37.726159 systemd[1]: Finished extend-filesystems.service.
May 13 00:35:37.734091 bash[1242]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:35:37.735124 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 13 00:35:37.745971 env[1216]: time="2025-05-13T00:35:37.745907080Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 13 00:35:37.774305 env[1216]: time="2025-05-13T00:35:37.774246800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:35:37.774589 env[1216]: time="2025-05-13T00:35:37.774567760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.775861400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.775892600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776099080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776116160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776128920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776138520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776209280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776554080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776739960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:35:37.776954 env[1216]: time="2025-05-13T00:35:37.776757120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:35:37.777288 env[1216]: time="2025-05-13T00:35:37.776816120Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 13 00:35:37.777288 env[1216]: time="2025-05-13T00:35:37.776828480Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781155240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781204840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781220560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781258680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781276760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781290920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781303360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781687680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781752360Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781770080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:35:37.781727 env[1216]: time="2025-05-13T00:35:37.781784440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:35:37.782062 env[1216]: time="2025-05-13T00:35:37.781797000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:35:37.782062 env[1216]: time="2025-05-13T00:35:37.781919600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:35:37.782062 env[1216]: time="2025-05-13T00:35:37.781993000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:35:37.782238 env[1216]: time="2025-05-13T00:35:37.782218640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:35:37.782279 env[1216]: time="2025-05-13T00:35:37.782260680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782304 env[1216]: time="2025-05-13T00:35:37.782277520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:35:37.782441 env[1216]: time="2025-05-13T00:35:37.782425200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782479200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782498080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782509480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782521240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782533640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782547120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782559080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782609 env[1216]: time="2025-05-13T00:35:37.782572800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782735560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782753600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782767840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782779400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782792880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782803360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:35:37.782824 env[1216]: time="2025-05-13T00:35:37.782819880Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 13 00:35:37.782962 env[1216]: time="2025-05-13T00:35:37.782854440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:35:37.783113 env[1216]: time="2025-05-13T00:35:37.783058120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:35:37.784100 env[1216]: time="2025-05-13T00:35:37.783153120Z" level=info msg="Connect containerd service"
May 13 00:35:37.784100 env[1216]: time="2025-05-13T00:35:37.783186280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:35:37.784100 env[1216]: time="2025-05-13T00:35:37.783940800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784344320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784351920Z" level=info msg="Start subscribing containerd event"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784393280Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784427680Z" level=info msg="Start recovering state"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784472200Z" level=info msg="containerd successfully booted in 0.039736s"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784497800Z" level=info msg="Start event monitor"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784519640Z" level=info msg="Start snapshots syncer"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784529080Z" level=info msg="Start cni network conf syncer for default"
May 13 00:35:37.784670 env[1216]: time="2025-05-13T00:35:37.784536440Z" level=info msg="Start streaming server"
May 13 00:35:37.784563 systemd[1]: Started containerd.service.
May 13 00:35:37.810380 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:35:38.048893 tar[1212]: linux-arm64/LICENSE
May 13 00:35:38.048893 tar[1212]: linux-arm64/README.md
May 13 00:35:38.053037 systemd[1]: Finished prepare-helm.service.
May 13 00:35:38.588881 systemd-networkd[1043]: eth0: Gained IPv6LL
May 13 00:35:38.590605 systemd[1]: Finished systemd-networkd-wait-online.service.
May 13 00:35:38.591877 systemd[1]: Reached target network-online.target.
May 13 00:35:38.594357 systemd[1]: Starting kubelet.service...
May 13 00:35:39.124997 systemd[1]: Started kubelet.service.
May 13 00:35:39.271069 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:35:39.288098 systemd[1]: Finished sshd-keygen.service.
May 13 00:35:39.290587 systemd[1]: Starting issuegen.service...
May 13 00:35:39.295275 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:35:39.295434 systemd[1]: Finished issuegen.service.
May 13 00:35:39.297690 systemd[1]: Starting systemd-user-sessions.service...
May 13 00:35:39.303532 systemd[1]: Finished systemd-user-sessions.service.
May 13 00:35:39.306011 systemd[1]: Started getty@tty1.service.
May 13 00:35:39.308395 systemd[1]: Started serial-getty@ttyAMA0.service.
May 13 00:35:39.309573 systemd[1]: Reached target getty.target.
May 13 00:35:39.310442 systemd[1]: Reached target multi-user.target.
May 13 00:35:39.312657 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 13 00:35:39.318969 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 13 00:35:39.319109 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 13 00:35:39.320188 systemd[1]: Startup finished in 615ms (kernel) + 4.553s (initrd) + 5.193s (userspace) = 10.362s.
May 13 00:35:39.660838 kubelet[1260]: E0513 00:35:39.660785 1260 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:35:39.662627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:35:39.662768 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:35:42.308681 systemd[1]: Created slice system-sshd.slice.
May 13 00:35:42.309790 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:47216.service.
May 13 00:35:42.356494 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 47216 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:35:42.358472 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:35:42.366869 systemd-logind[1203]: New session 1 of user core.
May 13 00:35:42.367755 systemd[1]: Created slice user-500.slice.
May 13 00:35:42.368831 systemd[1]: Starting user-runtime-dir@500.service...
May 13 00:35:42.376566 systemd[1]: Finished user-runtime-dir@500.service.
May 13 00:35:42.377784 systemd[1]: Starting user@500.service...
May 13 00:35:42.380329 (systemd)[1286]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:35:42.436876 systemd[1286]: Queued start job for default target default.target.
May 13 00:35:42.437400 systemd[1286]: Reached target paths.target.
May 13 00:35:42.437431 systemd[1286]: Reached target sockets.target.
May 13 00:35:42.437443 systemd[1286]: Reached target timers.target.
May 13 00:35:42.437453 systemd[1286]: Reached target basic.target.
May 13 00:35:42.437500 systemd[1286]: Reached target default.target.
May 13 00:35:42.437523 systemd[1286]: Startup finished in 51ms.
May 13 00:35:42.437596 systemd[1]: Started user@500.service.
May 13 00:35:42.438876 systemd[1]: Started session-1.scope.
May 13 00:35:42.489734 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:47230.service.
May 13 00:35:42.532954 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 47230 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:35:42.534892 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:35:42.539374 systemd[1]: Started session-2.scope.
May 13 00:35:42.539521 systemd-logind[1203]: New session 2 of user core.
May 13 00:35:42.592401 sshd[1295]: pam_unix(sshd:session): session closed for user core
May 13 00:35:42.595075 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:47230.service: Deactivated successfully.
May 13 00:35:42.595661 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:35:42.596161 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit.
May 13 00:35:42.597166 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:49566.service.
May 13 00:35:42.597845 systemd-logind[1203]: Removed session 2.
May 13 00:35:42.635018 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 49566 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:35:42.636445 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:35:42.640063 systemd-logind[1203]: New session 3 of user core.
May 13 00:35:42.640882 systemd[1]: Started session-3.scope.
May 13 00:35:42.689069 sshd[1301]: pam_unix(sshd:session): session closed for user core
May 13 00:35:42.691970 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:49566.service: Deactivated successfully.
May 13 00:35:42.692706 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:35:42.693387 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit.
May 13 00:35:42.695204 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:49574.service.
May 13 00:35:42.696047 systemd-logind[1203]: Removed session 3.
May 13 00:35:42.732053 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 49574 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:35:42.733249 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:35:42.736534 systemd-logind[1203]: New session 4 of user core.
May 13 00:35:42.738129 systemd[1]: Started session-4.scope.
May 13 00:35:42.791815 sshd[1307]: pam_unix(sshd:session): session closed for user core
May 13 00:35:42.797034 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:49574.service: Deactivated successfully.
May 13 00:35:42.798174 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:35:42.799051 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit.
May 13 00:35:42.801670 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:49588.service.
May 13 00:35:42.803015 systemd-logind[1203]: Removed session 4.
May 13 00:35:42.840091 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:35:42.841360 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:35:42.845376 systemd-logind[1203]: New session 5 of user core.
May 13 00:35:42.846640 systemd[1]: Started session-5.scope.
May 13 00:35:42.908149 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 00:35:42.908667 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 00:35:42.978728 systemd[1]: Starting docker.service...
May 13 00:35:43.055036 env[1328]: time="2025-05-13T00:35:43.054983780Z" level=info msg="Starting up"
May 13 00:35:43.056353 env[1328]: time="2025-05-13T00:35:43.056322822Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 13 00:35:43.056456 env[1328]: time="2025-05-13T00:35:43.056441840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 13 00:35:43.056524 env[1328]: time="2025-05-13T00:35:43.056508585Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 13 00:35:43.056582 env[1328]: time="2025-05-13T00:35:43.056568409Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 13 00:35:43.058914 env[1328]: time="2025-05-13T00:35:43.058888026Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 13 00:35:43.058914 env[1328]: time="2025-05-13T00:35:43.058911035Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 13 00:35:43.059012 env[1328]: time="2025-05-13T00:35:43.058925470Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 13 00:35:43.059012 env[1328]: time="2025-05-13T00:35:43.058935302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 13 00:35:43.172766 env[1328]: time="2025-05-13T00:35:43.172669242Z" level=info msg="Loading containers: start."
May 13 00:35:43.301023 kernel: Initializing XFRM netlink socket
May 13 00:35:43.326962 env[1328]: time="2025-05-13T00:35:43.326927106Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 13 00:35:43.389272 systemd-networkd[1043]: docker0: Link UP
May 13 00:35:43.408934 env[1328]: time="2025-05-13T00:35:43.408900662Z" level=info msg="Loading containers: done."
May 13 00:35:43.436737 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck379799027-merged.mount: Deactivated successfully. May 13 00:35:43.440278 env[1328]: time="2025-05-13T00:35:43.440237484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:35:43.440451 env[1328]: time="2025-05-13T00:35:43.440422972Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:35:43.440533 env[1328]: time="2025-05-13T00:35:43.440519649Z" level=info msg="Daemon has completed initialization" May 13 00:35:43.459816 systemd[1]: Started docker.service. May 13 00:35:43.463713 env[1328]: time="2025-05-13T00:35:43.463588939Z" level=info msg="API listen on /run/docker.sock" May 13 00:35:44.259410 env[1216]: time="2025-05-13T00:35:44.259113278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:35:44.953435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262786246.mount: Deactivated successfully. 
May 13 00:35:46.237262 env[1216]: time="2025-05-13T00:35:46.237207352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:46.238981 env[1216]: time="2025-05-13T00:35:46.238946580Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:46.241145 env[1216]: time="2025-05-13T00:35:46.241121713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:46.242548 env[1216]: time="2025-05-13T00:35:46.242522138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:46.243498 env[1216]: time="2025-05-13T00:35:46.243450230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 13 00:35:46.254133 env[1216]: time="2025-05-13T00:35:46.254096888Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:35:47.885407 env[1216]: time="2025-05-13T00:35:47.885341040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:47.886931 env[1216]: time="2025-05-13T00:35:47.886895542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:35:47.888513 env[1216]: time="2025-05-13T00:35:47.888488384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:47.890757 env[1216]: time="2025-05-13T00:35:47.890601044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:47.893016 env[1216]: time="2025-05-13T00:35:47.892979712Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 13 00:35:47.901837 env[1216]: time="2025-05-13T00:35:47.901805996Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:35:49.136779 env[1216]: time="2025-05-13T00:35:49.136706141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:49.139030 env[1216]: time="2025-05-13T00:35:49.138995204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:49.142665 env[1216]: time="2025-05-13T00:35:49.142627913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:49.144781 env[1216]: time="2025-05-13T00:35:49.144743089Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:49.145573 env[1216]: time="2025-05-13T00:35:49.145541933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 13 00:35:49.159520 env[1216]: time="2025-05-13T00:35:49.159466391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:35:49.913474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:35:49.913648 systemd[1]: Stopped kubelet.service. May 13 00:35:49.915056 systemd[1]: Starting kubelet.service... May 13 00:35:50.008613 systemd[1]: Started kubelet.service. May 13 00:35:50.083230 kubelet[1489]: E0513 00:35:50.083170 1489 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:35:50.086099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:35:50.086236 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:35:50.419775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669620707.mount: Deactivated successfully. 
May 13 00:35:50.849158 env[1216]: time="2025-05-13T00:35:50.848934573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:50.855290 env[1216]: time="2025-05-13T00:35:50.855239007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:50.857083 env[1216]: time="2025-05-13T00:35:50.857054852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:50.858284 env[1216]: time="2025-05-13T00:35:50.858244474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:50.858869 env[1216]: time="2025-05-13T00:35:50.858839225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 00:35:50.869326 env[1216]: time="2025-05-13T00:35:50.869261537Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:35:51.427423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482329288.mount: Deactivated successfully. 
May 13 00:35:52.356149 env[1216]: time="2025-05-13T00:35:52.356101329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.357864 env[1216]: time="2025-05-13T00:35:52.357834816Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.359832 env[1216]: time="2025-05-13T00:35:52.359799335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.364324 env[1216]: time="2025-05-13T00:35:52.364297461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.365111 env[1216]: time="2025-05-13T00:35:52.365072388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 00:35:52.381163 env[1216]: time="2025-05-13T00:35:52.381125961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:35:52.804586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011474163.mount: Deactivated successfully. 
May 13 00:35:52.808952 env[1216]: time="2025-05-13T00:35:52.808891852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.810282 env[1216]: time="2025-05-13T00:35:52.810245178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.812188 env[1216]: time="2025-05-13T00:35:52.812155770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.814011 env[1216]: time="2025-05-13T00:35:52.813964477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:52.815548 env[1216]: time="2025-05-13T00:35:52.815504898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 13 00:35:52.824570 env[1216]: time="2025-05-13T00:35:52.824523516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:35:53.305402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278343075.mount: Deactivated successfully. 
May 13 00:35:55.301343 env[1216]: time="2025-05-13T00:35:55.301292030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:55.302809 env[1216]: time="2025-05-13T00:35:55.302782835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:55.305886 env[1216]: time="2025-05-13T00:35:55.305847126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:55.307330 env[1216]: time="2025-05-13T00:35:55.307304963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:35:55.308156 env[1216]: time="2025-05-13T00:35:55.308118176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 13 00:36:00.337031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:36:00.337206 systemd[1]: Stopped kubelet.service. May 13 00:36:00.338613 systemd[1]: Starting kubelet.service... May 13 00:36:00.379499 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:36:00.379568 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:36:00.379785 systemd[1]: Stopped kubelet.service. May 13 00:36:00.382124 systemd[1]: Starting kubelet.service... May 13 00:36:00.399366 systemd[1]: Reloading. 
May 13 00:36:00.446935 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2025-05-13T00:36:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:36:00.446962 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2025-05-13T00:36:00Z" level=info msg="torcx already run" May 13 00:36:00.557266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:36:00.557285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:36:00.573804 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:36:00.645302 systemd[1]: Started kubelet.service. May 13 00:36:00.647886 systemd[1]: Stopping kubelet.service... May 13 00:36:00.648221 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:36:00.648426 systemd[1]: Stopped kubelet.service. May 13 00:36:00.650565 systemd[1]: Starting kubelet.service... May 13 00:36:00.771058 systemd[1]: Started kubelet.service. May 13 00:36:00.817537 kubelet[1668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:00.817537 kubelet[1668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 00:36:00.817537 kubelet[1668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:00.818060 kubelet[1668]: I0513 00:36:00.817960 1668 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:36:02.237947 kubelet[1668]: I0513 00:36:02.237907 1668 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:36:02.238326 kubelet[1668]: I0513 00:36:02.238309 1668 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:36:02.238635 kubelet[1668]: I0513 00:36:02.238616 1668 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:36:02.270062 kubelet[1668]: E0513 00:36:02.270022 1668 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.271123 kubelet[1668]: I0513 00:36:02.271082 1668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:36:02.279704 kubelet[1668]: I0513 00:36:02.279659 1668 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:36:02.280159 kubelet[1668]: I0513 00:36:02.280126 1668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:36:02.280409 kubelet[1668]: I0513 00:36:02.280228 1668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:36:02.280644 kubelet[1668]: I0513 00:36:02.280631 1668 topology_manager.go:138] "Creating topology manager with none policy" May 13 
00:36:02.280728 kubelet[1668]: I0513 00:36:02.280718 1668 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:36:02.281072 kubelet[1668]: I0513 00:36:02.281056 1668 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:02.282208 kubelet[1668]: I0513 00:36:02.282185 1668 kubelet.go:400] "Attempting to sync node with API server" May 13 00:36:02.282318 kubelet[1668]: I0513 00:36:02.282306 1668 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:36:02.282613 kubelet[1668]: I0513 00:36:02.282602 1668 kubelet.go:312] "Adding apiserver pod source" May 13 00:36:02.282754 kubelet[1668]: I0513 00:36:02.282743 1668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:36:02.283979 kubelet[1668]: W0513 00:36:02.283921 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.283979 kubelet[1668]: I0513 00:36:02.283974 1668 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:36:02.284087 kubelet[1668]: E0513 00:36:02.283988 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.284194 kubelet[1668]: W0513 00:36:02.284155 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.284265 kubelet[1668]: E0513 00:36:02.284254 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.284358 kubelet[1668]: I0513 00:36:02.284342 1668 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:36:02.284456 kubelet[1668]: W0513 00:36:02.284445 1668 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:36:02.285241 kubelet[1668]: I0513 00:36:02.285227 1668 server.go:1264] "Started kubelet" May 13 00:36:02.288698 kubelet[1668]: I0513 00:36:02.288649 1668 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:36:02.289663 kubelet[1668]: I0513 00:36:02.289605 1668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:36:02.289873 kubelet[1668]: I0513 00:36:02.289840 1668 server.go:455] "Adding debug handlers to kubelet server" May 13 00:36:02.290009 kubelet[1668]: I0513 00:36:02.289983 1668 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:36:02.299554 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 13 00:36:02.300135 kubelet[1668]: I0513 00:36:02.300106 1668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:36:02.301334 kubelet[1668]: E0513 00:36:02.301038 1668 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef111f50c79e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:36:02.28520131 +0000 UTC m=+1.509621149,LastTimestamp:2025-05-13 00:36:02.28520131 +0000 UTC m=+1.509621149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:36:02.302404 kubelet[1668]: I0513 00:36:02.302382 1668 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:36:02.302776 kubelet[1668]: I0513 00:36:02.302752 1668 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:36:02.303100 kubelet[1668]: I0513 00:36:02.303083 1668 reconciler.go:26] "Reconciler: start to sync state" May 13 00:36:02.303909 kubelet[1668]: W0513 00:36:02.303837 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.303981 kubelet[1668]: E0513 00:36:02.303925 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: 
connect: connection refused May 13 00:36:02.304604 kubelet[1668]: E0513 00:36:02.304566 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" May 13 00:36:02.305172 kubelet[1668]: I0513 00:36:02.304861 1668 factory.go:221] Registration of the systemd container factory successfully May 13 00:36:02.305172 kubelet[1668]: I0513 00:36:02.305004 1668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:36:02.305851 kubelet[1668]: E0513 00:36:02.305820 1668 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:36:02.306464 kubelet[1668]: I0513 00:36:02.306437 1668 factory.go:221] Registration of the containerd container factory successfully May 13 00:36:02.320326 kubelet[1668]: I0513 00:36:02.320299 1668 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:36:02.320326 kubelet[1668]: I0513 00:36:02.320318 1668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:36:02.320475 kubelet[1668]: I0513 00:36:02.320338 1668 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:02.403556 kubelet[1668]: I0513 00:36:02.403521 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:36:02.404081 kubelet[1668]: E0513 00:36:02.404051 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 13 00:36:02.496507 kubelet[1668]: I0513 00:36:02.496409 1668 policy_none.go:49] "None policy: Start" May 13 00:36:02.497609 
kubelet[1668]: I0513 00:36:02.497554 1668 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:36:02.497701 kubelet[1668]: I0513 00:36:02.497584 1668 state_mem.go:35] "Initializing new in-memory state store" May 13 00:36:02.502734 kubelet[1668]: I0513 00:36:02.502686 1668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:36:02.504076 systemd[1]: Created slice kubepods.slice. May 13 00:36:02.504381 kubelet[1668]: I0513 00:36:02.504352 1668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:36:02.504431 kubelet[1668]: I0513 00:36:02.504385 1668 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:36:02.504431 kubelet[1668]: I0513 00:36:02.504406 1668 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:36:02.504484 kubelet[1668]: E0513 00:36:02.504458 1668 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:36:02.505494 kubelet[1668]: E0513 00:36:02.505459 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" May 13 00:36:02.505603 kubelet[1668]: W0513 00:36:02.505575 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:02.505658 kubelet[1668]: E0513 00:36:02.505619 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: 
connect: connection refused May 13 00:36:02.510399 systemd[1]: Created slice kubepods-burstable.slice. May 13 00:36:02.512970 systemd[1]: Created slice kubepods-besteffort.slice. May 13 00:36:02.525603 kubelet[1668]: I0513 00:36:02.525563 1668 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:36:02.525856 kubelet[1668]: I0513 00:36:02.525811 1668 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:36:02.525969 kubelet[1668]: I0513 00:36:02.525950 1668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:36:02.527227 kubelet[1668]: E0513 00:36:02.527201 1668 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:36:02.604589 kubelet[1668]: I0513 00:36:02.604539 1668 topology_manager.go:215] "Topology Admit Handler" podUID="51aaf9d643569475cc4c775e4b7d1270" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:36:02.605311 kubelet[1668]: I0513 00:36:02.605290 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:36:02.605512 kubelet[1668]: I0513 00:36:02.605399 1668 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:36:02.605830 kubelet[1668]: E0513 00:36:02.605799 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 13 00:36:02.606297 kubelet[1668]: I0513 00:36:02.606272 1668 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:36:02.611498 systemd[1]: Created slice 
kubepods-burstable-pod51aaf9d643569475cc4c775e4b7d1270.slice. May 13 00:36:02.637027 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 00:36:02.653958 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 13 00:36:02.704954 kubelet[1668]: I0513 00:36:02.704904 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:02.704954 kubelet[1668]: I0513 00:36:02.704948 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:02.705124 kubelet[1668]: I0513 00:36:02.704970 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:36:02.705124 kubelet[1668]: I0513 00:36:02.704986 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51aaf9d643569475cc4c775e4b7d1270-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51aaf9d643569475cc4c775e4b7d1270\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:02.705124 kubelet[1668]: I0513 00:36:02.705003 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51aaf9d643569475cc4c775e4b7d1270-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51aaf9d643569475cc4c775e4b7d1270\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:02.705124 kubelet[1668]: I0513 00:36:02.705020 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51aaf9d643569475cc4c775e4b7d1270-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51aaf9d643569475cc4c775e4b7d1270\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:02.705124 kubelet[1668]: I0513 00:36:02.705036 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:02.705236 kubelet[1668]: I0513 00:36:02.705051 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:02.705236 kubelet[1668]: I0513 00:36:02.705066 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:02.906306 kubelet[1668]: E0513 00:36:02.906175 1668 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" May 13 00:36:02.935461 kubelet[1668]: E0513 00:36:02.935418 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:02.936121 env[1216]: time="2025-05-13T00:36:02.936085519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51aaf9d643569475cc4c775e4b7d1270,Namespace:kube-system,Attempt:0,}" May 13 00:36:02.952755 kubelet[1668]: E0513 00:36:02.952715 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:02.953272 env[1216]: time="2025-05-13T00:36:02.953236060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:36:02.956464 kubelet[1668]: E0513 00:36:02.956434 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:02.956903 env[1216]: time="2025-05-13T00:36:02.956866175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:36:03.007352 kubelet[1668]: I0513 00:36:03.007322 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:36:03.007857 kubelet[1668]: E0513 00:36:03.007830 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" 
node="localhost" May 13 00:36:03.279958 kubelet[1668]: W0513 00:36:03.279904 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.279958 kubelet[1668]: E0513 00:36:03.279955 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.553940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699398817.mount: Deactivated successfully. May 13 00:36:03.561489 env[1216]: time="2025-05-13T00:36:03.561441030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.564702 env[1216]: time="2025-05-13T00:36:03.564642723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.565633 env[1216]: time="2025-05-13T00:36:03.565604010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.567902 env[1216]: time="2025-05-13T00:36:03.567867309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.569243 env[1216]: time="2025-05-13T00:36:03.569210394Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.571731 env[1216]: time="2025-05-13T00:36:03.571676059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.575024 env[1216]: time="2025-05-13T00:36:03.574990261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.578216 env[1216]: time="2025-05-13T00:36:03.578180607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.579068 env[1216]: time="2025-05-13T00:36:03.579040412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.579898 env[1216]: time="2025-05-13T00:36:03.579870690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.580657 env[1216]: time="2025-05-13T00:36:03.580631529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.581485 env[1216]: time="2025-05-13T00:36:03.581458172Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:03.617508 env[1216]: time="2025-05-13T00:36:03.617427683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:03.617726 env[1216]: time="2025-05-13T00:36:03.617469994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:03.617726 env[1216]: time="2025-05-13T00:36:03.617513703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:03.617726 env[1216]: time="2025-05-13T00:36:03.617529085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:03.617889 env[1216]: time="2025-05-13T00:36:03.617860102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:03.617971 env[1216]: time="2025-05-13T00:36:03.617936534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a859386020d87dcc87bfaa331321cca70153b46ad9f566c972825d028118279 pid=1717 runtime=io.containerd.runc.v2 May 13 00:36:03.618076 env[1216]: time="2025-05-13T00:36:03.618053758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:03.619137 env[1216]: time="2025-05-13T00:36:03.619051922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c918ea0227e797344f83a90e1656849885e2a5af709545ef23e153e5e74d654 pid=1718 runtime=io.containerd.runc.v2 May 13 00:36:03.623108 env[1216]: time="2025-05-13T00:36:03.623016292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:03.623285 env[1216]: time="2025-05-13T00:36:03.623081177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:03.623285 env[1216]: time="2025-05-13T00:36:03.623092044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:03.623528 env[1216]: time="2025-05-13T00:36:03.623491501Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58955ecb598fb4020612fca45bd6bd8f0570201cf47bf467167d787c2b030925 pid=1741 runtime=io.containerd.runc.v2 May 13 00:36:03.634784 systemd[1]: Started cri-containerd-4a859386020d87dcc87bfaa331321cca70153b46ad9f566c972825d028118279.scope. May 13 00:36:03.643345 systemd[1]: Started cri-containerd-58955ecb598fb4020612fca45bd6bd8f0570201cf47bf467167d787c2b030925.scope. May 13 00:36:03.644427 systemd[1]: Started cri-containerd-9c918ea0227e797344f83a90e1656849885e2a5af709545ef23e153e5e74d654.scope. 
May 13 00:36:03.677312 kubelet[1668]: W0513 00:36:03.677215 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.677312 kubelet[1668]: E0513 00:36:03.677280 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.696969 kubelet[1668]: W0513 00:36:03.696855 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.696969 kubelet[1668]: E0513 00:36:03.696924 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.706601 kubelet[1668]: E0513 00:36:03.706532 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" May 13 00:36:03.714116 env[1216]: time="2025-05-13T00:36:03.714066544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a859386020d87dcc87bfaa331321cca70153b46ad9f566c972825d028118279\"" May 13 00:36:03.715645 kubelet[1668]: E0513 00:36:03.715374 1668 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:03.718509 env[1216]: time="2025-05-13T00:36:03.718459298Z" level=info msg="CreateContainer within sandbox \"4a859386020d87dcc87bfaa331321cca70153b46ad9f566c972825d028118279\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:36:03.718945 env[1216]: time="2025-05-13T00:36:03.718911334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51aaf9d643569475cc4c775e4b7d1270,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c918ea0227e797344f83a90e1656849885e2a5af709545ef23e153e5e74d654\"" May 13 00:36:03.720399 kubelet[1668]: E0513 00:36:03.720040 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:03.722784 env[1216]: time="2025-05-13T00:36:03.722138158Z" level=info msg="CreateContainer within sandbox \"9c918ea0227e797344f83a90e1656849885e2a5af709545ef23e153e5e74d654\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:36:03.733508 env[1216]: time="2025-05-13T00:36:03.733466041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"58955ecb598fb4020612fca45bd6bd8f0570201cf47bf467167d787c2b030925\"" May 13 00:36:03.734619 kubelet[1668]: E0513 00:36:03.734381 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:03.736684 env[1216]: time="2025-05-13T00:36:03.736625783Z" level=info msg="CreateContainer within sandbox \"58955ecb598fb4020612fca45bd6bd8f0570201cf47bf467167d787c2b030925\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:36:03.742622 env[1216]: time="2025-05-13T00:36:03.742571698Z" level=info msg="CreateContainer within sandbox \"4a859386020d87dcc87bfaa331321cca70153b46ad9f566c972825d028118279\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4974e4e9ddf48e40aa54b07b0a18a066b6c84911f1a985aa59cb5bf8c04580ff\"" May 13 00:36:03.743743 env[1216]: time="2025-05-13T00:36:03.743691122Z" level=info msg="StartContainer for \"4974e4e9ddf48e40aa54b07b0a18a066b6c84911f1a985aa59cb5bf8c04580ff\"" May 13 00:36:03.745569 env[1216]: time="2025-05-13T00:36:03.745525358Z" level=info msg="CreateContainer within sandbox \"9c918ea0227e797344f83a90e1656849885e2a5af709545ef23e153e5e74d654\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"198690d4a5c07b4a358357cda6d3a516895309961d7511a499e591bd6f3fb27b\"" May 13 00:36:03.746057 env[1216]: time="2025-05-13T00:36:03.746029134Z" level=info msg="StartContainer for \"198690d4a5c07b4a358357cda6d3a516895309961d7511a499e591bd6f3fb27b\"" May 13 00:36:03.754358 env[1216]: time="2025-05-13T00:36:03.754292886Z" level=info msg="CreateContainer within sandbox \"58955ecb598fb4020612fca45bd6bd8f0570201cf47bf467167d787c2b030925\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"062b2419b32377d3141f0d68fd4b9c337da36df0847efe6064caf5acffe41666\"" May 13 00:36:03.754919 env[1216]: time="2025-05-13T00:36:03.754883842Z" level=info msg="StartContainer for \"062b2419b32377d3141f0d68fd4b9c337da36df0847efe6064caf5acffe41666\"" May 13 00:36:03.764605 systemd[1]: Started cri-containerd-198690d4a5c07b4a358357cda6d3a516895309961d7511a499e591bd6f3fb27b.scope. May 13 00:36:03.768726 systemd[1]: Started cri-containerd-4974e4e9ddf48e40aa54b07b0a18a066b6c84911f1a985aa59cb5bf8c04580ff.scope. May 13 00:36:03.779989 systemd[1]: Started cri-containerd-062b2419b32377d3141f0d68fd4b9c337da36df0847efe6064caf5acffe41666.scope. 
May 13 00:36:03.810783 kubelet[1668]: I0513 00:36:03.810617 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:36:03.811119 kubelet[1668]: E0513 00:36:03.811089 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" May 13 00:36:03.821846 kubelet[1668]: W0513 00:36:03.821808 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.821846 kubelet[1668]: E0513 00:36:03.821852 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused May 13 00:36:03.854681 env[1216]: time="2025-05-13T00:36:03.852951848Z" level=info msg="StartContainer for \"4974e4e9ddf48e40aa54b07b0a18a066b6c84911f1a985aa59cb5bf8c04580ff\" returns successfully" May 13 00:36:03.888860 env[1216]: time="2025-05-13T00:36:03.883498678Z" level=info msg="StartContainer for \"198690d4a5c07b4a358357cda6d3a516895309961d7511a499e591bd6f3fb27b\" returns successfully" May 13 00:36:03.895738 env[1216]: time="2025-05-13T00:36:03.892889604Z" level=info msg="StartContainer for \"062b2419b32377d3141f0d68fd4b9c337da36df0847efe6064caf5acffe41666\" returns successfully" May 13 00:36:04.514359 kubelet[1668]: E0513 00:36:04.514324 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:04.516831 kubelet[1668]: E0513 00:36:04.516807 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:04.519482 kubelet[1668]: E0513 00:36:04.519455 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:05.413293 kubelet[1668]: I0513 00:36:05.413242 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:36:05.489088 kubelet[1668]: E0513 00:36:05.489037 1668 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:36:05.521456 kubelet[1668]: E0513 00:36:05.521400 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:05.583580 kubelet[1668]: I0513 00:36:05.583526 1668 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:36:05.591207 kubelet[1668]: E0513 00:36:05.591153 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:05.692269 kubelet[1668]: E0513 00:36:05.692141 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:05.793030 kubelet[1668]: E0513 00:36:05.792975 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:05.893640 kubelet[1668]: E0513 00:36:05.893598 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:05.994322 kubelet[1668]: E0513 00:36:05.994285 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.094526 kubelet[1668]: E0513 00:36:06.094490 
1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.195253 kubelet[1668]: E0513 00:36:06.195215 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.296139 kubelet[1668]: E0513 00:36:06.296041 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.397231 kubelet[1668]: E0513 00:36:06.397164 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.497747 kubelet[1668]: E0513 00:36:06.497682 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.598919 kubelet[1668]: E0513 00:36:06.598808 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.700459 kubelet[1668]: E0513 00:36:06.700408 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:36:06.911651 kubelet[1668]: E0513 00:36:06.911543 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:07.285361 kubelet[1668]: I0513 00:36:07.285314 1668 apiserver.go:52] "Watching apiserver" May 13 00:36:07.303351 kubelet[1668]: I0513 00:36:07.303313 1668 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:36:07.522991 kubelet[1668]: E0513 00:36:07.522953 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:07.815664 systemd[1]: Reloading. 
May 13 00:36:07.859901 /usr/lib/systemd/system-generators/torcx-generator[1964]: time="2025-05-13T00:36:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:36:07.859931 /usr/lib/systemd/system-generators/torcx-generator[1964]: time="2025-05-13T00:36:07Z" level=info msg="torcx already run" May 13 00:36:07.924150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:36:07.924171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:36:07.940563 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:36:08.024316 systemd[1]: Stopping kubelet.service... May 13 00:36:08.040310 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:36:08.040521 systemd[1]: Stopped kubelet.service. May 13 00:36:08.040573 systemd[1]: kubelet.service: Consumed 1.910s CPU time. May 13 00:36:08.042378 systemd[1]: Starting kubelet.service... May 13 00:36:08.130026 systemd[1]: Started kubelet.service. May 13 00:36:08.173704 kubelet[2006]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:08.173704 kubelet[2006]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 00:36:08.173704 kubelet[2006]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:36:08.174062 kubelet[2006]: I0513 00:36:08.173756 2006 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:36:08.178243 kubelet[2006]: I0513 00:36:08.178209 2006 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:36:08.178369 kubelet[2006]: I0513 00:36:08.178347 2006 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:36:08.178677 kubelet[2006]: I0513 00:36:08.178656 2006 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:36:08.180245 kubelet[2006]: I0513 00:36:08.180221 2006 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:36:08.181728 kubelet[2006]: I0513 00:36:08.181672 2006 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:36:08.187715 kubelet[2006]: I0513 00:36:08.187675 2006 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:36:08.187908 kubelet[2006]: I0513 00:36:08.187868 2006 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:36:08.188078 kubelet[2006]: I0513 00:36:08.187901 2006 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:36:08.188171 kubelet[2006]: I0513 00:36:08.188079 2006 topology_manager.go:138] "Creating topology manager with none policy" May 13 
00:36:08.188171 kubelet[2006]: I0513 00:36:08.188088 2006 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:36:08.188171 kubelet[2006]: I0513 00:36:08.188121 2006 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:08.188244 kubelet[2006]: I0513 00:36:08.188215 2006 kubelet.go:400] "Attempting to sync node with API server" May 13 00:36:08.188244 kubelet[2006]: I0513 00:36:08.188228 2006 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:36:08.188287 kubelet[2006]: I0513 00:36:08.188251 2006 kubelet.go:312] "Adding apiserver pod source" May 13 00:36:08.188287 kubelet[2006]: I0513 00:36:08.188266 2006 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:36:08.189089 kubelet[2006]: I0513 00:36:08.189057 2006 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:36:08.189262 kubelet[2006]: I0513 00:36:08.189237 2006 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:36:08.190747 kubelet[2006]: I0513 00:36:08.189842 2006 server.go:1264] "Started kubelet" May 13 00:36:08.190747 kubelet[2006]: I0513 00:36:08.190114 2006 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:36:08.192857 kubelet[2006]: I0513 00:36:08.192832 2006 server.go:455] "Adding debug handlers to kubelet server" May 13 00:36:08.193376 kubelet[2006]: I0513 00:36:08.193315 2006 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:36:08.193637 kubelet[2006]: I0513 00:36:08.193607 2006 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:36:08.202605 kubelet[2006]: I0513 00:36:08.194386 2006 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:36:08.202605 kubelet[2006]: I0513 00:36:08.200234 2006 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:36:08.202605 kubelet[2006]: I0513 00:36:08.200890 2006 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:36:08.202605 kubelet[2006]: I0513 00:36:08.201087 2006 reconciler.go:26] "Reconciler: start to sync state" May 13 00:36:08.215017 kubelet[2006]: I0513 00:36:08.211491 2006 factory.go:221] Registration of the systemd container factory successfully May 13 00:36:08.215017 kubelet[2006]: I0513 00:36:08.211632 2006 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:36:08.226998 kubelet[2006]: I0513 00:36:08.226218 2006 factory.go:221] Registration of the containerd container factory successfully May 13 00:36:08.229386 kubelet[2006]: E0513 00:36:08.229346 2006 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:36:08.234819 kubelet[2006]: I0513 00:36:08.234782 2006 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:36:08.235884 kubelet[2006]: I0513 00:36:08.235863 2006 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:36:08.236003 kubelet[2006]: I0513 00:36:08.235992 2006 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:36:08.236081 kubelet[2006]: I0513 00:36:08.236071 2006 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:36:08.236179 kubelet[2006]: E0513 00:36:08.236161 2006 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:36:08.265834 kubelet[2006]: I0513 00:36:08.265808 2006 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:36:08.265834 kubelet[2006]: I0513 00:36:08.265828 2006 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:36:08.265987 kubelet[2006]: I0513 00:36:08.265851 2006 state_mem.go:36] "Initialized new in-memory state store" May 13 00:36:08.266046 kubelet[2006]: I0513 00:36:08.266029 2006 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:36:08.266079 kubelet[2006]: I0513 00:36:08.266046 2006 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:36:08.266079 kubelet[2006]: I0513 00:36:08.266065 2006 policy_none.go:49] "None policy: Start" May 13 00:36:08.266584 kubelet[2006]: I0513 00:36:08.266570 2006 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:36:08.266633 kubelet[2006]: I0513 00:36:08.266593 2006 state_mem.go:35] "Initializing new in-memory state store" May 13 00:36:08.266746 kubelet[2006]: I0513 00:36:08.266732 2006 state_mem.go:75] "Updated machine memory state" May 13 00:36:08.270509 kubelet[2006]: I0513 00:36:08.270481 2006 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:36:08.271272 kubelet[2006]: I0513 00:36:08.271223 2006 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:36:08.271555 kubelet[2006]: I0513 00:36:08.271479 2006 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:36:08.303660 kubelet[2006]: I0513 00:36:08.303624 2006 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:36:08.309988 kubelet[2006]: I0513 00:36:08.309943 2006 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:36:08.310108 kubelet[2006]: I0513 00:36:08.310038 2006 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:36:08.336951 kubelet[2006]: I0513 00:36:08.336894 2006 topology_manager.go:215] "Topology Admit Handler" podUID="51aaf9d643569475cc4c775e4b7d1270" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:36:08.337079 kubelet[2006]: I0513 00:36:08.337025 2006 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:36:08.337079 kubelet[2006]: I0513 00:36:08.337065 2006 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:36:08.342869 kubelet[2006]: E0513 00:36:08.342822 2006 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:36:08.502969 kubelet[2006]: I0513 00:36:08.502925 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51aaf9d643569475cc4c775e4b7d1270-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51aaf9d643569475cc4c775e4b7d1270\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:08.503221 kubelet[2006]: I0513 00:36:08.503200 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:08.503337 kubelet[2006]: I0513 00:36:08.503324 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:08.503418 kubelet[2006]: I0513 00:36:08.503406 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:08.503511 kubelet[2006]: I0513 00:36:08.503496 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:08.503613 kubelet[2006]: I0513 00:36:08.503597 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:36:08.503717 kubelet[2006]: I0513 00:36:08.503691 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/51aaf9d643569475cc4c775e4b7d1270-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51aaf9d643569475cc4c775e4b7d1270\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:08.503809 kubelet[2006]: I0513 00:36:08.503797 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:36:08.503896 kubelet[2006]: I0513 00:36:08.503882 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51aaf9d643569475cc4c775e4b7d1270-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51aaf9d643569475cc4c775e4b7d1270\") " pod="kube-system/kube-apiserver-localhost" May 13 00:36:08.642979 kubelet[2006]: E0513 00:36:08.642948 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:08.643123 kubelet[2006]: E0513 00:36:08.643005 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:08.643353 kubelet[2006]: E0513 00:36:08.643322 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:08.812301 sudo[2040]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:36:08.812922 sudo[2040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 00:36:09.189522 kubelet[2006]: I0513 
00:36:09.189415 2006 apiserver.go:52] "Watching apiserver" May 13 00:36:09.201252 kubelet[2006]: I0513 00:36:09.201216 2006 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:36:09.247596 kubelet[2006]: E0513 00:36:09.247559 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:09.248322 kubelet[2006]: E0513 00:36:09.248298 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:09.248800 kubelet[2006]: E0513 00:36:09.248778 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:09.254446 sudo[2040]: pam_unix(sudo:session): session closed for user root May 13 00:36:09.267160 kubelet[2006]: I0513 00:36:09.267086 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.267068449 podStartE2EDuration="1.267068449s" podCreationTimestamp="2025-05-13 00:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:09.266845045 +0000 UTC m=+1.129962898" watchObservedRunningTime="2025-05-13 00:36:09.267068449 +0000 UTC m=+1.130186302" May 13 00:36:09.273902 kubelet[2006]: I0513 00:36:09.273861 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.273849647 podStartE2EDuration="1.273849647s" podCreationTimestamp="2025-05-13 00:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-13 00:36:09.273684133 +0000 UTC m=+1.136801986" watchObservedRunningTime="2025-05-13 00:36:09.273849647 +0000 UTC m=+1.136967500" May 13 00:36:10.248863 kubelet[2006]: E0513 00:36:10.248820 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:10.249379 kubelet[2006]: E0513 00:36:10.249343 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:10.909616 sudo[1316]: pam_unix(sudo:session): session closed for user root May 13 00:36:10.911155 sshd[1313]: pam_unix(sshd:session): session closed for user core May 13 00:36:10.913889 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. May 13 00:36:10.914065 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:49588.service: Deactivated successfully. May 13 00:36:10.914867 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:36:10.915030 systemd[1]: session-5.scope: Consumed 7.324s CPU time. May 13 00:36:10.915600 systemd-logind[1203]: Removed session 5. 
May 13 00:36:11.361053 kubelet[2006]: E0513 00:36:11.361014 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:17.441530 kubelet[2006]: E0513 00:36:17.441489 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:17.455676 kubelet[2006]: I0513 00:36:17.455622 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=11.455607549 podStartE2EDuration="11.455607549s" podCreationTimestamp="2025-05-13 00:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:09.280483921 +0000 UTC m=+1.143601774" watchObservedRunningTime="2025-05-13 00:36:17.455607549 +0000 UTC m=+9.318725402" May 13 00:36:18.261374 kubelet[2006]: E0513 00:36:18.261339 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:19.350142 kubelet[2006]: E0513 00:36:19.349544 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.372204 kubelet[2006]: I0513 00:36:21.372168 2006 topology_manager.go:215] "Topology Admit Handler" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" podNamespace="kube-system" podName="cilium-nvjp2" May 13 00:36:21.372813 kubelet[2006]: I0513 00:36:21.372793 2006 topology_manager.go:215] "Topology Admit Handler" podUID="b90f95d4-f918-4622-990e-792bb174be6a" podNamespace="kube-system" podName="kube-proxy-6lpws" May 13 00:36:21.372938 kubelet[2006]: E0513 
00:36:21.372797 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.378117 systemd[1]: Created slice kubepods-burstable-podfc531689_2d6d_49c7_b50b_848e8affcc20.slice. May 13 00:36:21.385198 systemd[1]: Created slice kubepods-besteffort-podb90f95d4_f918_4622_990e_792bb174be6a.slice. May 13 00:36:21.391736 kubelet[2006]: I0513 00:36:21.391714 2006 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:36:21.392327 env[1216]: time="2025-05-13T00:36:21.392284957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:36:21.392612 kubelet[2006]: I0513 00:36:21.392474 2006 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:36:21.489149 kubelet[2006]: I0513 00:36:21.489096 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-bpf-maps\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489289 kubelet[2006]: I0513 00:36:21.489182 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-net\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489289 kubelet[2006]: I0513 00:36:21.489202 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-hubble-tls\") pod \"cilium-nvjp2\" (UID: 
\"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489289 kubelet[2006]: I0513 00:36:21.489220 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b90f95d4-f918-4622-990e-792bb174be6a-lib-modules\") pod \"kube-proxy-6lpws\" (UID: \"b90f95d4-f918-4622-990e-792bb174be6a\") " pod="kube-system/kube-proxy-6lpws" May 13 00:36:21.489289 kubelet[2006]: I0513 00:36:21.489236 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57cwb\" (UniqueName: \"kubernetes.io/projected/b90f95d4-f918-4622-990e-792bb174be6a-kube-api-access-57cwb\") pod \"kube-proxy-6lpws\" (UID: \"b90f95d4-f918-4622-990e-792bb174be6a\") " pod="kube-system/kube-proxy-6lpws" May 13 00:36:21.489289 kubelet[2006]: I0513 00:36:21.489252 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-run\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489289 kubelet[2006]: I0513 00:36:21.489267 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-etc-cni-netd\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489442 kubelet[2006]: I0513 00:36:21.489281 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-hostproc\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489442 kubelet[2006]: I0513 
00:36:21.489303 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-kernel\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489442 kubelet[2006]: I0513 00:36:21.489321 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b90f95d4-f918-4622-990e-792bb174be6a-xtables-lock\") pod \"kube-proxy-6lpws\" (UID: \"b90f95d4-f918-4622-990e-792bb174be6a\") " pod="kube-system/kube-proxy-6lpws" May 13 00:36:21.489442 kubelet[2006]: I0513 00:36:21.489337 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc531689-2d6d-49c7-b50b-848e8affcc20-clustermesh-secrets\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489442 kubelet[2006]: I0513 00:36:21.489353 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-lib-modules\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489442 kubelet[2006]: I0513 00:36:21.489374 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-cgroup\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489575 kubelet[2006]: I0513 00:36:21.489389 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cni-path\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489575 kubelet[2006]: I0513 00:36:21.489403 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-xtables-lock\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489575 kubelet[2006]: I0513 00:36:21.489418 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-config-path\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489575 kubelet[2006]: I0513 00:36:21.489432 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gt72\" (UniqueName: \"kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-kube-api-access-4gt72\") pod \"cilium-nvjp2\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " pod="kube-system/cilium-nvjp2" May 13 00:36:21.489575 kubelet[2006]: I0513 00:36:21.489447 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b90f95d4-f918-4622-990e-792bb174be6a-kube-proxy\") pod \"kube-proxy-6lpws\" (UID: \"b90f95d4-f918-4622-990e-792bb174be6a\") " pod="kube-system/kube-proxy-6lpws" May 13 00:36:21.683961 kubelet[2006]: E0513 00:36:21.683809 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.685185 env[1216]: 
time="2025-05-13T00:36:21.684688260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nvjp2,Uid:fc531689-2d6d-49c7-b50b-848e8affcc20,Namespace:kube-system,Attempt:0,}" May 13 00:36:21.692850 kubelet[2006]: E0513 00:36:21.692804 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.694073 env[1216]: time="2025-05-13T00:36:21.693201266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6lpws,Uid:b90f95d4-f918-4622-990e-792bb174be6a,Namespace:kube-system,Attempt:0,}" May 13 00:36:21.702646 env[1216]: time="2025-05-13T00:36:21.702575464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:21.702646 env[1216]: time="2025-05-13T00:36:21.702623186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:21.702807 env[1216]: time="2025-05-13T00:36:21.702633586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:21.703031 env[1216]: time="2025-05-13T00:36:21.702992800Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719 pid=2099 runtime=io.containerd.runc.v2 May 13 00:36:21.706703 env[1216]: time="2025-05-13T00:36:21.706607018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:21.706826 env[1216]: time="2025-05-13T00:36:21.706690742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:21.706826 env[1216]: time="2025-05-13T00:36:21.706754904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:21.707398 env[1216]: time="2025-05-13T00:36:21.707020874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f391ec7df39ecf40b976202e7fb04de9940d0437c6304b87ad06b799e2a80957 pid=2115 runtime=io.containerd.runc.v2 May 13 00:36:21.717114 systemd[1]: Started cri-containerd-b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719.scope. May 13 00:36:21.733790 systemd[1]: Started cri-containerd-f391ec7df39ecf40b976202e7fb04de9940d0437c6304b87ad06b799e2a80957.scope. May 13 00:36:21.759221 env[1216]: time="2025-05-13T00:36:21.759147668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nvjp2,Uid:fc531689-2d6d-49c7-b50b-848e8affcc20,Namespace:kube-system,Attempt:0,} returns sandbox id \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\"" May 13 00:36:21.760301 kubelet[2006]: E0513 00:36:21.760275 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.762263 env[1216]: time="2025-05-13T00:36:21.762219865Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:36:21.774091 env[1216]: time="2025-05-13T00:36:21.774045398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6lpws,Uid:b90f95d4-f918-4622-990e-792bb174be6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f391ec7df39ecf40b976202e7fb04de9940d0437c6304b87ad06b799e2a80957\"" May 13 00:36:21.774862 kubelet[2006]: E0513 00:36:21.774838 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:21.778394 env[1216]: time="2025-05-13T00:36:21.778354362Z" level=info msg="CreateContainer within sandbox \"f391ec7df39ecf40b976202e7fb04de9940d0437c6304b87ad06b799e2a80957\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:36:21.795125 env[1216]: time="2025-05-13T00:36:21.795064041Z" level=info msg="CreateContainer within sandbox \"f391ec7df39ecf40b976202e7fb04de9940d0437c6304b87ad06b799e2a80957\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70c9c0c74609203fe82beba40af013d2efa60677cf6704432635aebeefdeeaa8\"" May 13 00:36:21.795674 env[1216]: time="2025-05-13T00:36:21.795610982Z" level=info msg="StartContainer for \"70c9c0c74609203fe82beba40af013d2efa60677cf6704432635aebeefdeeaa8\"" May 13 00:36:21.812608 systemd[1]: Started cri-containerd-70c9c0c74609203fe82beba40af013d2efa60677cf6704432635aebeefdeeaa8.scope. May 13 00:36:21.882674 env[1216]: time="2025-05-13T00:36:21.882627830Z" level=info msg="StartContainer for \"70c9c0c74609203fe82beba40af013d2efa60677cf6704432635aebeefdeeaa8\" returns successfully" May 13 00:36:22.109775 kubelet[2006]: I0513 00:36:22.109724 2006 topology_manager.go:215] "Topology Admit Handler" podUID="e330849f-89c2-43aa-b1ff-22426d76c9fa" podNamespace="kube-system" podName="cilium-operator-599987898-c9qz5" May 13 00:36:22.115085 systemd[1]: Created slice kubepods-besteffort-pode330849f_89c2_43aa_b1ff_22426d76c9fa.slice. 
May 13 00:36:22.269773 kubelet[2006]: E0513 00:36:22.269741 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:22.272036 kubelet[2006]: E0513 00:36:22.272010 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:22.281397 kubelet[2006]: I0513 00:36:22.281342 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6lpws" podStartSLOduration=1.281326345 podStartE2EDuration="1.281326345s" podCreationTimestamp="2025-05-13 00:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:22.28118254 +0000 UTC m=+14.144300393" watchObservedRunningTime="2025-05-13 00:36:22.281326345 +0000 UTC m=+14.144444198" May 13 00:36:22.295681 kubelet[2006]: I0513 00:36:22.295641 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsmd\" (UniqueName: \"kubernetes.io/projected/e330849f-89c2-43aa-b1ff-22426d76c9fa-kube-api-access-8rsmd\") pod \"cilium-operator-599987898-c9qz5\" (UID: \"e330849f-89c2-43aa-b1ff-22426d76c9fa\") " pod="kube-system/cilium-operator-599987898-c9qz5" May 13 00:36:22.295908 kubelet[2006]: I0513 00:36:22.295888 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e330849f-89c2-43aa-b1ff-22426d76c9fa-cilium-config-path\") pod \"cilium-operator-599987898-c9qz5\" (UID: \"e330849f-89c2-43aa-b1ff-22426d76c9fa\") " pod="kube-system/cilium-operator-599987898-c9qz5" May 13 00:36:22.417732 kubelet[2006]: E0513 00:36:22.417376 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:22.418569 env[1216]: time="2025-05-13T00:36:22.418508490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-c9qz5,Uid:e330849f-89c2-43aa-b1ff-22426d76c9fa,Namespace:kube-system,Attempt:0,}" May 13 00:36:22.433300 env[1216]: time="2025-05-13T00:36:22.433236986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:22.433506 env[1216]: time="2025-05-13T00:36:22.433294068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:22.433506 env[1216]: time="2025-05-13T00:36:22.433306588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:22.433506 env[1216]: time="2025-05-13T00:36:22.433464114Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf pid=2339 runtime=io.containerd.runc.v2 May 13 00:36:22.443796 systemd[1]: Started cri-containerd-9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf.scope. 
May 13 00:36:22.481212 env[1216]: time="2025-05-13T00:36:22.481168607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-c9qz5,Uid:e330849f-89c2-43aa-b1ff-22426d76c9fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf\"" May 13 00:36:22.482101 kubelet[2006]: E0513 00:36:22.482077 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:22.959146 update_engine[1204]: I0513 00:36:22.958774 1204 update_attempter.cc:509] Updating boot flags... May 13 00:36:26.029431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289766470.mount: Deactivated successfully. May 13 00:36:28.413316 env[1216]: time="2025-05-13T00:36:28.413261763Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:28.415221 env[1216]: time="2025-05-13T00:36:28.415184256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:28.416630 env[1216]: time="2025-05-13T00:36:28.416596054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:28.418061 env[1216]: time="2025-05-13T00:36:28.418007732Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 
13 00:36:28.424634 env[1216]: time="2025-05-13T00:36:28.424598311Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:36:28.425704 env[1216]: time="2025-05-13T00:36:28.425653020Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:36:28.436842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696614451.mount: Deactivated successfully. May 13 00:36:28.443900 env[1216]: time="2025-05-13T00:36:28.443851674Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\"" May 13 00:36:28.444426 env[1216]: time="2025-05-13T00:36:28.444393689Z" level=info msg="StartContainer for \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\"" May 13 00:36:28.470286 systemd[1]: Started cri-containerd-e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6.scope. May 13 00:36:28.550030 env[1216]: time="2025-05-13T00:36:28.549973037Z" level=info msg="StartContainer for \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\" returns successfully" May 13 00:36:28.622570 systemd[1]: cri-containerd-e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6.scope: Deactivated successfully. 
May 13 00:36:28.721781 env[1216]: time="2025-05-13T00:36:28.721734582Z" level=info msg="shim disconnected" id=e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6 May 13 00:36:28.722372 env[1216]: time="2025-05-13T00:36:28.722350198Z" level=warning msg="cleaning up after shim disconnected" id=e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6 namespace=k8s.io May 13 00:36:28.722455 env[1216]: time="2025-05-13T00:36:28.722441561Z" level=info msg="cleaning up dead shim" May 13 00:36:28.730219 env[1216]: time="2025-05-13T00:36:28.730179331Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2436 runtime=io.containerd.runc.v2\n" May 13 00:36:29.288217 kubelet[2006]: E0513 00:36:29.288177 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:29.296668 env[1216]: time="2025-05-13T00:36:29.296628477Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:36:29.309249 env[1216]: time="2025-05-13T00:36:29.309198083Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\"" May 13 00:36:29.311315 env[1216]: time="2025-05-13T00:36:29.311261577Z" level=info msg="StartContainer for \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\"" May 13 00:36:29.327434 systemd[1]: Started cri-containerd-c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d.scope. 
May 13 00:36:29.375020 env[1216]: time="2025-05-13T00:36:29.374953710Z" level=info msg="StartContainer for \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\" returns successfully" May 13 00:36:29.389027 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:36:29.389279 systemd[1]: Stopped systemd-sysctl.service. May 13 00:36:29.389453 systemd[1]: Stopping systemd-sysctl.service... May 13 00:36:29.391143 systemd[1]: Starting systemd-sysctl.service... May 13 00:36:29.395150 systemd[1]: cri-containerd-c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d.scope: Deactivated successfully. May 13 00:36:29.400188 systemd[1]: Finished systemd-sysctl.service. May 13 00:36:29.419949 env[1216]: time="2025-05-13T00:36:29.419836714Z" level=info msg="shim disconnected" id=c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d May 13 00:36:29.423014 env[1216]: time="2025-05-13T00:36:29.422961235Z" level=warning msg="cleaning up after shim disconnected" id=c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d namespace=k8s.io May 13 00:36:29.423290 env[1216]: time="2025-05-13T00:36:29.423271123Z" level=info msg="cleaning up dead shim" May 13 00:36:29.432377 env[1216]: time="2025-05-13T00:36:29.432324318Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2500 runtime=io.containerd.runc.v2\n" May 13 00:36:29.435388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6-rootfs.mount: Deactivated successfully. May 13 00:36:29.522816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847043175.mount: Deactivated successfully. 
May 13 00:36:30.292150 kubelet[2006]: E0513 00:36:30.291321 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:30.299942 env[1216]: time="2025-05-13T00:36:30.298572737Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:36:30.311750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170523284.mount: Deactivated successfully. May 13 00:36:30.315579 env[1216]: time="2025-05-13T00:36:30.315527637Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\"" May 13 00:36:30.317671 env[1216]: time="2025-05-13T00:36:30.317615049Z" level=info msg="StartContainer for \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\"" May 13 00:36:30.334721 systemd[1]: Started cri-containerd-2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437.scope. May 13 00:36:30.391132 env[1216]: time="2025-05-13T00:36:30.391080432Z" level=info msg="StartContainer for \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\" returns successfully" May 13 00:36:30.417731 systemd[1]: cri-containerd-2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437.scope: Deactivated successfully. 
May 13 00:36:30.425472 env[1216]: time="2025-05-13T00:36:30.425425444Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:30.427043 env[1216]: time="2025-05-13T00:36:30.427010083Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:30.428868 env[1216]: time="2025-05-13T00:36:30.428837489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:36:30.429407 env[1216]: time="2025-05-13T00:36:30.429354461Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 00:36:30.435421 env[1216]: time="2025-05-13T00:36:30.435367731Z" level=info msg="CreateContainer within sandbox \"9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:36:30.448058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437-rootfs.mount: Deactivated successfully. 
May 13 00:36:30.523084 env[1216]: time="2025-05-13T00:36:30.523016705Z" level=info msg="shim disconnected" id=2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437 May 13 00:36:30.523340 env[1216]: time="2025-05-13T00:36:30.523312073Z" level=warning msg="cleaning up after shim disconnected" id=2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437 namespace=k8s.io May 13 00:36:30.523405 env[1216]: time="2025-05-13T00:36:30.523391475Z" level=info msg="cleaning up dead shim" May 13 00:36:30.527263 env[1216]: time="2025-05-13T00:36:30.527206049Z" level=info msg="CreateContainer within sandbox \"9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\"" May 13 00:36:30.528241 env[1216]: time="2025-05-13T00:36:30.528202434Z" level=info msg="StartContainer for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\"" May 13 00:36:30.531442 env[1216]: time="2025-05-13T00:36:30.531409153Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2556 runtime=io.containerd.runc.v2\n" May 13 00:36:30.543278 systemd[1]: Started cri-containerd-1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f.scope. 
May 13 00:36:30.596433 env[1216]: time="2025-05-13T00:36:30.596382045Z" level=info msg="StartContainer for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" returns successfully" May 13 00:36:31.293834 kubelet[2006]: E0513 00:36:31.293805 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:31.300960 kubelet[2006]: E0513 00:36:31.300937 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:31.302818 env[1216]: time="2025-05-13T00:36:31.302778369Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:36:31.319581 env[1216]: time="2025-05-13T00:36:31.319522127Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\"" May 13 00:36:31.320218 env[1216]: time="2025-05-13T00:36:31.320183023Z" level=info msg="StartContainer for \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\"" May 13 00:36:31.331459 kubelet[2006]: I0513 00:36:31.331387 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-c9qz5" podStartSLOduration=1.383303704 podStartE2EDuration="9.331369048s" podCreationTimestamp="2025-05-13 00:36:22 +0000 UTC" firstStartedPulling="2025-05-13 00:36:22.483531413 +0000 UTC m=+14.346649266" lastFinishedPulling="2025-05-13 00:36:30.431596797 +0000 UTC m=+22.294714610" observedRunningTime="2025-05-13 00:36:31.311673221 +0000 UTC m=+23.174791074" 
watchObservedRunningTime="2025-05-13 00:36:31.331369048 +0000 UTC m=+23.194486901" May 13 00:36:31.346973 systemd[1]: Started cri-containerd-b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f.scope. May 13 00:36:31.411641 systemd[1]: cri-containerd-b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f.scope: Deactivated successfully. May 13 00:36:31.412871 env[1216]: time="2025-05-13T00:36:31.412797022Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc531689_2d6d_49c7_b50b_848e8affcc20.slice/cri-containerd-b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f.scope/memory.events\": no such file or directory" May 13 00:36:31.416200 env[1216]: time="2025-05-13T00:36:31.416141981Z" level=info msg="StartContainer for \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\" returns successfully" May 13 00:36:31.439861 env[1216]: time="2025-05-13T00:36:31.439805183Z" level=info msg="shim disconnected" id=b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f May 13 00:36:31.440338 env[1216]: time="2025-05-13T00:36:31.440313035Z" level=warning msg="cleaning up after shim disconnected" id=b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f namespace=k8s.io May 13 00:36:31.440407 env[1216]: time="2025-05-13T00:36:31.440394237Z" level=info msg="cleaning up dead shim" May 13 00:36:31.447792 env[1216]: time="2025-05-13T00:36:31.447744452Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:36:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2651 runtime=io.containerd.runc.v2\n" May 13 00:36:32.311122 kubelet[2006]: E0513 00:36:32.311082 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:32.313582 kubelet[2006]: E0513 
00:36:32.311443 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:32.318029 env[1216]: time="2025-05-13T00:36:32.317976037Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:36:32.374420 env[1216]: time="2025-05-13T00:36:32.374361040Z" level=info msg="CreateContainer within sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\"" May 13 00:36:32.375333 env[1216]: time="2025-05-13T00:36:32.375303421Z" level=info msg="StartContainer for \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\"" May 13 00:36:32.399369 systemd[1]: Started cri-containerd-9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3.scope. May 13 00:36:32.444337 env[1216]: time="2025-05-13T00:36:32.444276710Z" level=info msg="StartContainer for \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" returns successfully" May 13 00:36:32.460340 systemd[1]: run-containerd-runc-k8s.io-9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3-runc.DmCPR7.mount: Deactivated successfully. 
May 13 00:36:32.593865 kubelet[2006]: I0513 00:36:32.593744 2006 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:36:32.648937 kubelet[2006]: I0513 00:36:32.648814 2006 topology_manager.go:215] "Topology Admit Handler" podUID="7126420f-19a3-42ff-ab5e-f245b8c2c01c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9g2pf" May 13 00:36:32.658968 kubelet[2006]: I0513 00:36:32.658915 2006 topology_manager.go:215] "Topology Admit Handler" podUID="4f436d87-d242-4cd4-b85c-d31268924f68" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rc5pp" May 13 00:36:32.667027 kubelet[2006]: I0513 00:36:32.666988 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8w2x\" (UniqueName: \"kubernetes.io/projected/7126420f-19a3-42ff-ab5e-f245b8c2c01c-kube-api-access-p8w2x\") pod \"coredns-7db6d8ff4d-9g2pf\" (UID: \"7126420f-19a3-42ff-ab5e-f245b8c2c01c\") " pod="kube-system/coredns-7db6d8ff4d-9g2pf" May 13 00:36:32.667256 kubelet[2006]: I0513 00:36:32.667233 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f436d87-d242-4cd4-b85c-d31268924f68-config-volume\") pod \"coredns-7db6d8ff4d-rc5pp\" (UID: \"4f436d87-d242-4cd4-b85c-d31268924f68\") " pod="kube-system/coredns-7db6d8ff4d-rc5pp" May 13 00:36:32.667351 kubelet[2006]: I0513 00:36:32.667337 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7126420f-19a3-42ff-ab5e-f245b8c2c01c-config-volume\") pod \"coredns-7db6d8ff4d-9g2pf\" (UID: \"7126420f-19a3-42ff-ab5e-f245b8c2c01c\") " pod="kube-system/coredns-7db6d8ff4d-9g2pf" May 13 00:36:32.667436 kubelet[2006]: I0513 00:36:32.667422 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7wgb\" (UniqueName: 
\"kubernetes.io/projected/4f436d87-d242-4cd4-b85c-d31268924f68-kube-api-access-m7wgb\") pod \"coredns-7db6d8ff4d-rc5pp\" (UID: \"4f436d87-d242-4cd4-b85c-d31268924f68\") " pod="kube-system/coredns-7db6d8ff4d-rc5pp" May 13 00:36:32.670364 systemd[1]: Created slice kubepods-burstable-pod7126420f_19a3_42ff_ab5e_f245b8c2c01c.slice. May 13 00:36:32.677999 systemd[1]: Created slice kubepods-burstable-pod4f436d87_d242_4cd4_b85c_d31268924f68.slice. May 13 00:36:32.795737 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 13 00:36:32.974730 kubelet[2006]: E0513 00:36:32.974670 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:32.975655 env[1216]: time="2025-05-13T00:36:32.975610955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9g2pf,Uid:7126420f-19a3-42ff-ab5e-f245b8c2c01c,Namespace:kube-system,Attempt:0,}" May 13 00:36:32.981941 kubelet[2006]: E0513 00:36:32.981895 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:32.982876 env[1216]: time="2025-05-13T00:36:32.982827799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc5pp,Uid:4f436d87-d242-4cd4-b85c-d31268924f68,Namespace:kube-system,Attempt:0,}" May 13 00:36:33.112736 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 13 00:36:33.316442 kubelet[2006]: E0513 00:36:33.316343 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:33.330735 kubelet[2006]: I0513 00:36:33.330658 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nvjp2" podStartSLOduration=5.668509516 podStartE2EDuration="12.330640041s" podCreationTimestamp="2025-05-13 00:36:21 +0000 UTC" firstStartedPulling="2025-05-13 00:36:21.761686885 +0000 UTC m=+13.624804738" lastFinishedPulling="2025-05-13 00:36:28.42381741 +0000 UTC m=+20.286935263" observedRunningTime="2025-05-13 00:36:33.330215472 +0000 UTC m=+25.193333405" watchObservedRunningTime="2025-05-13 00:36:33.330640041 +0000 UTC m=+25.193757894" May 13 00:36:33.905481 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:38900.service. May 13 00:36:33.948105 sshd[2831]: Accepted publickey for core from 10.0.0.1 port 38900 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:33.949760 sshd[2831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:33.953044 systemd-logind[1203]: New session 6 of user core. May 13 00:36:33.953892 systemd[1]: Started session-6.scope. May 13 00:36:34.079980 sshd[2831]: pam_unix(sshd:session): session closed for user core May 13 00:36:34.082832 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:38900.service: Deactivated successfully. May 13 00:36:34.083568 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:36:34.084076 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit. May 13 00:36:34.084705 systemd-logind[1203]: Removed session 6. 
May 13 00:36:34.319222 kubelet[2006]: E0513 00:36:34.317566 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:34.753772 systemd-networkd[1043]: cilium_host: Link UP May 13 00:36:34.755351 systemd-networkd[1043]: cilium_net: Link UP May 13 00:36:34.755478 systemd-networkd[1043]: cilium_net: Gained carrier May 13 00:36:34.755968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:36:34.756024 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:36:34.755761 systemd-networkd[1043]: cilium_host: Gained carrier May 13 00:36:34.854591 systemd-networkd[1043]: cilium_vxlan: Link UP May 13 00:36:34.854599 systemd-networkd[1043]: cilium_vxlan: Gained carrier May 13 00:36:34.916906 systemd-networkd[1043]: cilium_net: Gained IPv6LL May 13 00:36:35.100807 systemd-networkd[1043]: cilium_host: Gained IPv6LL May 13 00:36:35.224722 kernel: NET: Registered PF_ALG protocol family May 13 00:36:35.319149 kubelet[2006]: E0513 00:36:35.319108 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:35.819527 systemd-networkd[1043]: lxc_health: Link UP May 13 00:36:35.827779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:36:35.827836 systemd-networkd[1043]: lxc_health: Gained carrier May 13 00:36:35.996874 systemd-networkd[1043]: cilium_vxlan: Gained IPv6LL May 13 00:36:36.164361 systemd-networkd[1043]: lxca028fd73ec5f: Link UP May 13 00:36:36.172980 systemd-networkd[1043]: lxccf4a3ecc121b: Link UP May 13 00:36:36.181946 kernel: eth0: renamed from tmpb047d May 13 00:36:36.188721 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca028fd73ec5f: link becomes ready May 13 00:36:36.187729 systemd-networkd[1043]: lxca028fd73ec5f: Gained carrier May 
13 00:36:36.189753 kernel: eth0: renamed from tmp41434 May 13 00:36:36.198727 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf4a3ecc121b: link becomes ready May 13 00:36:36.198866 systemd-networkd[1043]: lxccf4a3ecc121b: Gained carrier May 13 00:36:37.661128 systemd-networkd[1043]: lxca028fd73ec5f: Gained IPv6LL May 13 00:36:37.688716 kubelet[2006]: E0513 00:36:37.688666 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:37.725053 systemd-networkd[1043]: lxc_health: Gained IPv6LL May 13 00:36:38.108917 systemd-networkd[1043]: lxccf4a3ecc121b: Gained IPv6LL May 13 00:36:39.084835 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:38902.service. May 13 00:36:39.126523 sshd[3229]: Accepted publickey for core from 10.0.0.1 port 38902 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:39.127982 sshd[3229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:39.132535 systemd[1]: Started session-7.scope. May 13 00:36:39.133152 systemd-logind[1203]: New session 7 of user core. May 13 00:36:39.254817 sshd[3229]: pam_unix(sshd:session): session closed for user core May 13 00:36:39.257398 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:38902.service: Deactivated successfully. May 13 00:36:39.258104 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:36:39.258613 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit. May 13 00:36:39.259484 systemd-logind[1203]: Removed session 7. May 13 00:36:39.791147 env[1216]: time="2025-05-13T00:36:39.791071393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:39.791147 env[1216]: time="2025-05-13T00:36:39.791114314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:39.791627 env[1216]: time="2025-05-13T00:36:39.791124794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:39.791627 env[1216]: time="2025-05-13T00:36:39.791299597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b047d65c807ae5baf5ca8698fe86c1d69c6119e142f647982b5bdac5e1ec8f99 pid=3263 runtime=io.containerd.runc.v2 May 13 00:36:39.792353 env[1216]: time="2025-05-13T00:36:39.792299254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:36:39.794951 env[1216]: time="2025-05-13T00:36:39.792336575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:36:39.794951 env[1216]: time="2025-05-13T00:36:39.792351335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:36:39.794951 env[1216]: time="2025-05-13T00:36:39.792543219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41434a38944bad609355697bbc76bcb408d41a8b8cc2ed9c545d52eec61668b2 pid=3264 runtime=io.containerd.runc.v2 May 13 00:36:39.808829 systemd[1]: Started cri-containerd-b047d65c807ae5baf5ca8698fe86c1d69c6119e142f647982b5bdac5e1ec8f99.scope. May 13 00:36:39.818415 systemd[1]: Started cri-containerd-41434a38944bad609355697bbc76bcb408d41a8b8cc2ed9c545d52eec61668b2.scope. 
May 13 00:36:39.908687 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:36:39.910310 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:36:39.925164 env[1216]: time="2025-05-13T00:36:39.925125192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rc5pp,Uid:4f436d87-d242-4cd4-b85c-d31268924f68,Namespace:kube-system,Attempt:0,} returns sandbox id \"41434a38944bad609355697bbc76bcb408d41a8b8cc2ed9c545d52eec61668b2\"" May 13 00:36:39.926849 kubelet[2006]: E0513 00:36:39.926178 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:39.929009 env[1216]: time="2025-05-13T00:36:39.928969618Z" level=info msg="CreateContainer within sandbox \"41434a38944bad609355697bbc76bcb408d41a8b8cc2ed9c545d52eec61668b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:36:39.933633 env[1216]: time="2025-05-13T00:36:39.933598538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9g2pf,Uid:7126420f-19a3-42ff-ab5e-f245b8c2c01c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b047d65c807ae5baf5ca8698fe86c1d69c6119e142f647982b5bdac5e1ec8f99\"" May 13 00:36:39.934399 kubelet[2006]: E0513 00:36:39.934363 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:39.936898 env[1216]: time="2025-05-13T00:36:39.936819194Z" level=info msg="CreateContainer within sandbox \"b047d65c807ae5baf5ca8698fe86c1d69c6119e142f647982b5bdac5e1ec8f99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:36:39.948354 env[1216]: time="2025-05-13T00:36:39.948306072Z" level=info msg="CreateContainer 
within sandbox \"41434a38944bad609355697bbc76bcb408d41a8b8cc2ed9c545d52eec61668b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fd2b46b32ab158dc9f5880f4ed5ae499235b24648a89f726b4cdb91f48b3803\"" May 13 00:36:39.949141 env[1216]: time="2025-05-13T00:36:39.949113406Z" level=info msg="StartContainer for \"9fd2b46b32ab158dc9f5880f4ed5ae499235b24648a89f726b4cdb91f48b3803\"" May 13 00:36:39.951803 env[1216]: time="2025-05-13T00:36:39.951713091Z" level=info msg="CreateContainer within sandbox \"b047d65c807ae5baf5ca8698fe86c1d69c6119e142f647982b5bdac5e1ec8f99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"beee171cf80e47ae81dbeff3d7e5ae4c042a7176242ab95a0c0f8db690f98799\"" May 13 00:36:39.953129 env[1216]: time="2025-05-13T00:36:39.952310022Z" level=info msg="StartContainer for \"beee171cf80e47ae81dbeff3d7e5ae4c042a7176242ab95a0c0f8db690f98799\"" May 13 00:36:39.975539 systemd[1]: Started cri-containerd-beee171cf80e47ae81dbeff3d7e5ae4c042a7176242ab95a0c0f8db690f98799.scope. May 13 00:36:39.977467 systemd[1]: Started cri-containerd-9fd2b46b32ab158dc9f5880f4ed5ae499235b24648a89f726b4cdb91f48b3803.scope. 
May 13 00:36:40.012285 env[1216]: time="2025-05-13T00:36:40.012157730Z" level=info msg="StartContainer for \"beee171cf80e47ae81dbeff3d7e5ae4c042a7176242ab95a0c0f8db690f98799\" returns successfully" May 13 00:36:40.014858 env[1216]: time="2025-05-13T00:36:40.014818454Z" level=info msg="StartContainer for \"9fd2b46b32ab158dc9f5880f4ed5ae499235b24648a89f726b4cdb91f48b3803\" returns successfully" May 13 00:36:40.330380 kubelet[2006]: E0513 00:36:40.330337 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:40.332345 kubelet[2006]: E0513 00:36:40.332320 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:40.340902 kubelet[2006]: I0513 00:36:40.340845 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rc5pp" podStartSLOduration=18.340830338 podStartE2EDuration="18.340830338s" podCreationTimestamp="2025-05-13 00:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:40.340041765 +0000 UTC m=+32.203159618" watchObservedRunningTime="2025-05-13 00:36:40.340830338 +0000 UTC m=+32.203948191" May 13 00:36:40.363442 kubelet[2006]: I0513 00:36:40.363378 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9g2pf" podStartSLOduration=18.363363235 podStartE2EDuration="18.363363235s" podCreationTimestamp="2025-05-13 00:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:36:40.351838562 +0000 UTC m=+32.214956415" watchObservedRunningTime="2025-05-13 00:36:40.363363235 +0000 UTC m=+32.226481088" May 
13 00:36:40.794997 systemd[1]: run-containerd-runc-k8s.io-41434a38944bad609355697bbc76bcb408d41a8b8cc2ed9c545d52eec61668b2-runc.8lyJkw.mount: Deactivated successfully. May 13 00:36:41.333934 kubelet[2006]: E0513 00:36:41.333856 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:41.334268 kubelet[2006]: E0513 00:36:41.333994 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:42.338031 kubelet[2006]: E0513 00:36:42.337998 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:42.338447 kubelet[2006]: E0513 00:36:42.338130 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:44.259875 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:55220.service. May 13 00:36:44.310404 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 55220 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:44.312034 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:44.317428 systemd-logind[1203]: New session 8 of user core. May 13 00:36:44.318219 systemd[1]: Started session-8.scope. May 13 00:36:44.489273 sshd[3411]: pam_unix(sshd:session): session closed for user core May 13 00:36:44.491925 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:55220.service: Deactivated successfully. May 13 00:36:44.492640 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:36:44.493173 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. 
May 13 00:36:44.494230 systemd-logind[1203]: Removed session 8. May 13 00:36:48.194030 kubelet[2006]: I0513 00:36:48.193994 2006 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:36:48.195233 kubelet[2006]: E0513 00:36:48.195209 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:48.350016 kubelet[2006]: E0513 00:36:48.349987 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:36:49.493963 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:55228.service. May 13 00:36:49.533976 sshd[3427]: Accepted publickey for core from 10.0.0.1 port 55228 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:49.535605 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:49.539259 systemd-logind[1203]: New session 9 of user core. May 13 00:36:49.540154 systemd[1]: Started session-9.scope. May 13 00:36:49.659070 sshd[3427]: pam_unix(sshd:session): session closed for user core May 13 00:36:49.664331 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:55242.service. May 13 00:36:49.665050 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:55228.service: Deactivated successfully. May 13 00:36:49.666208 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:36:49.666996 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit. May 13 00:36:49.667919 systemd-logind[1203]: Removed session 9. 
May 13 00:36:49.705947 sshd[3440]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:49.707268 sshd[3440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:49.710710 systemd-logind[1203]: New session 10 of user core. May 13 00:36:49.711605 systemd[1]: Started session-10.scope. May 13 00:36:49.887889 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:55250.service. May 13 00:36:49.893443 sshd[3440]: pam_unix(sshd:session): session closed for user core May 13 00:36:49.901217 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:55242.service: Deactivated successfully. May 13 00:36:49.903250 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:36:49.903873 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit. May 13 00:36:49.904703 systemd-logind[1203]: Removed session 10. May 13 00:36:49.933954 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:49.935388 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:49.938793 systemd-logind[1203]: New session 11 of user core. May 13 00:36:49.939645 systemd[1]: Started session-11.scope. May 13 00:36:50.052244 sshd[3453]: pam_unix(sshd:session): session closed for user core May 13 00:36:50.054720 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:55250.service: Deactivated successfully. May 13 00:36:50.055582 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:36:50.056177 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit. May 13 00:36:50.059923 systemd-logind[1203]: Removed session 11. May 13 00:36:55.057402 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:40680.service. 
May 13 00:36:55.099653 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 40680 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:36:55.100889 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:36:55.104952 systemd-logind[1203]: New session 12 of user core. May 13 00:36:55.105790 systemd[1]: Started session-12.scope. May 13 00:36:55.223634 sshd[3470]: pam_unix(sshd:session): session closed for user core May 13 00:36:55.226313 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:40680.service: Deactivated successfully. May 13 00:36:55.227055 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:36:55.227767 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit. May 13 00:36:55.228915 systemd-logind[1203]: Removed session 12. May 13 00:37:00.229517 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:40688.service. May 13 00:37:00.273665 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 40688 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:00.275012 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:00.279346 systemd-logind[1203]: New session 13 of user core. May 13 00:37:00.280253 systemd[1]: Started session-13.scope. May 13 00:37:00.414304 sshd[3483]: pam_unix(sshd:session): session closed for user core May 13 00:37:00.418670 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:40688.service: Deactivated successfully. May 13 00:37:00.419339 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:37:00.421058 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit. May 13 00:37:00.422315 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:40690.service. May 13 00:37:00.423266 systemd-logind[1203]: Removed session 13. 
May 13 00:37:00.462994 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 40690 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:00.464302 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:00.467929 systemd-logind[1203]: New session 14 of user core. May 13 00:37:00.468864 systemd[1]: Started session-14.scope. May 13 00:37:00.764036 sshd[3497]: pam_unix(sshd:session): session closed for user core May 13 00:37:00.767650 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:40706.service. May 13 00:37:00.768241 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:40690.service: Deactivated successfully. May 13 00:37:00.768964 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:37:00.769848 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit. May 13 00:37:00.770889 systemd-logind[1203]: Removed session 14. May 13 00:37:00.810333 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 40706 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:00.813179 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:00.820257 systemd-logind[1203]: New session 15 of user core. May 13 00:37:00.820747 systemd[1]: Started session-15.scope. May 13 00:37:02.279922 sshd[3507]: pam_unix(sshd:session): session closed for user core May 13 00:37:02.283644 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:40720.service. May 13 00:37:02.285354 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. May 13 00:37:02.285452 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:40706.service: Deactivated successfully. May 13 00:37:02.286625 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:37:02.288785 systemd-logind[1203]: Removed session 15. 
May 13 00:37:02.331196 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 40720 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:02.332578 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:02.335998 systemd-logind[1203]: New session 16 of user core. May 13 00:37:02.336892 systemd[1]: Started session-16.scope. May 13 00:37:02.568658 sshd[3525]: pam_unix(sshd:session): session closed for user core May 13 00:37:02.572509 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:40704.service. May 13 00:37:02.573006 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:40720.service: Deactivated successfully. May 13 00:37:02.573784 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:37:02.574428 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. May 13 00:37:02.575551 systemd-logind[1203]: Removed session 16. May 13 00:37:02.615090 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 40704 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:02.617868 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:02.621272 systemd-logind[1203]: New session 17 of user core. May 13 00:37:02.622207 systemd[1]: Started session-17.scope. May 13 00:37:02.751184 sshd[3538]: pam_unix(sshd:session): session closed for user core May 13 00:37:02.753799 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. May 13 00:37:02.753908 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:40704.service: Deactivated successfully. May 13 00:37:02.754577 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:37:02.758744 systemd-logind[1203]: Removed session 17. May 13 00:37:07.756367 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:40714.service. 
May 13 00:37:07.796372 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 40714 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:07.797713 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:07.801761 systemd-logind[1203]: New session 18 of user core. May 13 00:37:07.802408 systemd[1]: Started session-18.scope. May 13 00:37:07.927416 sshd[3555]: pam_unix(sshd:session): session closed for user core May 13 00:37:07.931022 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:40714.service: Deactivated successfully. May 13 00:37:07.931935 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:37:07.932669 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit. May 13 00:37:07.933624 systemd-logind[1203]: Removed session 18. May 13 00:37:12.937170 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:44760.service. May 13 00:37:12.979439 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 44760 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:12.980966 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:12.986381 systemd-logind[1203]: New session 19 of user core. May 13 00:37:12.987380 systemd[1]: Started session-19.scope. May 13 00:37:13.102323 sshd[3571]: pam_unix(sshd:session): session closed for user core May 13 00:37:13.105123 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:44760.service: Deactivated successfully. May 13 00:37:13.105961 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:37:13.106479 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. May 13 00:37:13.107174 systemd-logind[1203]: Removed session 19. May 13 00:37:18.106973 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:44764.service. 
May 13 00:37:18.146994 sshd[3584]: Accepted publickey for core from 10.0.0.1 port 44764 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:18.148278 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:18.151461 systemd-logind[1203]: New session 20 of user core. May 13 00:37:18.152442 systemd[1]: Started session-20.scope. May 13 00:37:18.260396 sshd[3584]: pam_unix(sshd:session): session closed for user core May 13 00:37:18.263743 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. May 13 00:37:18.264806 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:44778.service. May 13 00:37:18.265324 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:44764.service: Deactivated successfully. May 13 00:37:18.265984 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:37:18.266729 systemd-logind[1203]: Removed session 20. May 13 00:37:18.304280 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 44778 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk May 13 00:37:18.305683 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:37:18.309006 systemd-logind[1203]: New session 21 of user core. May 13 00:37:18.309691 systemd[1]: Started session-21.scope. May 13 00:37:20.374852 env[1216]: time="2025-05-13T00:37:20.374795357Z" level=info msg="StopContainer for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" with timeout 30 (s)" May 13 00:37:20.375215 env[1216]: time="2025-05-13T00:37:20.375182039Z" level=info msg="Stop container \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" with signal terminated" May 13 00:37:20.389126 systemd[1]: cri-containerd-1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f.scope: Deactivated successfully. 
May 13 00:37:20.396718 systemd[1]: run-containerd-runc-k8s.io-9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3-runc.ZuSOso.mount: Deactivated successfully. May 13 00:37:20.417751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f-rootfs.mount: Deactivated successfully. May 13 00:37:20.428063 env[1216]: time="2025-05-13T00:37:20.427814956Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:37:20.432670 env[1216]: time="2025-05-13T00:37:20.432620348Z" level=info msg="shim disconnected" id=1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f May 13 00:37:20.432670 env[1216]: time="2025-05-13T00:37:20.432666029Z" level=warning msg="cleaning up after shim disconnected" id=1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f namespace=k8s.io May 13 00:37:20.432670 env[1216]: time="2025-05-13T00:37:20.432676549Z" level=info msg="cleaning up dead shim" May 13 00:37:20.433413 env[1216]: time="2025-05-13T00:37:20.433383554Z" level=info msg="StopContainer for \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" with timeout 2 (s)" May 13 00:37:20.433835 env[1216]: time="2025-05-13T00:37:20.433806356Z" level=info msg="Stop container \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" with signal terminated" May 13 00:37:20.443076 systemd-networkd[1043]: lxc_health: Link DOWN May 13 00:37:20.443085 systemd-networkd[1043]: lxc_health: Lost carrier May 13 00:37:20.444066 env[1216]: time="2025-05-13T00:37:20.444032426Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3645 runtime=io.containerd.runc.v2\n" May 13 00:37:20.447591 env[1216]: 
time="2025-05-13T00:37:20.447409569Z" level=info msg="StopContainer for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" returns successfully" May 13 00:37:20.448039 env[1216]: time="2025-05-13T00:37:20.447994293Z" level=info msg="StopPodSandbox for \"9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf\"" May 13 00:37:20.448112 env[1216]: time="2025-05-13T00:37:20.448059933Z" level=info msg="Container to stop \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:20.449767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf-shm.mount: Deactivated successfully. May 13 00:37:20.463565 systemd[1]: cri-containerd-9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf.scope: Deactivated successfully. May 13 00:37:20.476972 systemd[1]: cri-containerd-9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3.scope: Deactivated successfully. May 13 00:37:20.477286 systemd[1]: cri-containerd-9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3.scope: Consumed 6.626s CPU time. 
May 13 00:37:20.493369 env[1216]: time="2025-05-13T00:37:20.493320040Z" level=info msg="shim disconnected" id=9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf May 13 00:37:20.493369 env[1216]: time="2025-05-13T00:37:20.493365600Z" level=warning msg="cleaning up after shim disconnected" id=9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf namespace=k8s.io May 13 00:37:20.493369 env[1216]: time="2025-05-13T00:37:20.493375840Z" level=info msg="cleaning up dead shim" May 13 00:37:20.502200 env[1216]: time="2025-05-13T00:37:20.502154339Z" level=info msg="shim disconnected" id=9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3 May 13 00:37:20.502200 env[1216]: time="2025-05-13T00:37:20.502202020Z" level=warning msg="cleaning up after shim disconnected" id=9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3 namespace=k8s.io May 13 00:37:20.502413 env[1216]: time="2025-05-13T00:37:20.502213180Z" level=info msg="cleaning up dead shim" May 13 00:37:20.502946 env[1216]: time="2025-05-13T00:37:20.502918625Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\n" May 13 00:37:20.503242 env[1216]: time="2025-05-13T00:37:20.503219667Z" level=info msg="TearDown network for sandbox \"9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf\" successfully" May 13 00:37:20.503284 env[1216]: time="2025-05-13T00:37:20.503243787Z" level=info msg="StopPodSandbox for \"9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf\" returns successfully" May 13 00:37:20.511171 env[1216]: time="2025-05-13T00:37:20.510376435Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3713 runtime=io.containerd.runc.v2\n" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.512907452Z" level=info msg="StopContainer for 
\"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" returns successfully" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.513301455Z" level=info msg="StopPodSandbox for \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\"" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.513357255Z" level=info msg="Container to stop \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.513372615Z" level=info msg="Container to stop \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.513383535Z" level=info msg="Container to stop \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.513396136Z" level=info msg="Container to stop \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:20.516098 env[1216]: time="2025-05-13T00:37:20.513406856Z" level=info msg="Container to stop \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:37:20.520337 systemd[1]: cri-containerd-b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719.scope: Deactivated successfully. 
May 13 00:37:20.549459 env[1216]: time="2025-05-13T00:37:20.549407979Z" level=info msg="shim disconnected" id=b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719 May 13 00:37:20.550019 env[1216]: time="2025-05-13T00:37:20.549991703Z" level=warning msg="cleaning up after shim disconnected" id=b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719 namespace=k8s.io May 13 00:37:20.550213 env[1216]: time="2025-05-13T00:37:20.550106744Z" level=info msg="cleaning up dead shim" May 13 00:37:20.558033 env[1216]: time="2025-05-13T00:37:20.557990318Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3744 runtime=io.containerd.runc.v2\n" May 13 00:37:20.558545 env[1216]: time="2025-05-13T00:37:20.558494281Z" level=info msg="TearDown network for sandbox \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" successfully" May 13 00:37:20.558648 env[1216]: time="2025-05-13T00:37:20.558629242Z" level=info msg="StopPodSandbox for \"b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719\" returns successfully" May 13 00:37:20.666307 kubelet[2006]: I0513 00:37:20.665347 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-run\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666307 kubelet[2006]: I0513 00:37:20.665400 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-bpf-maps\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666307 kubelet[2006]: I0513 00:37:20.665418 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-cgroup\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666674 kubelet[2006]: I0513 00:37:20.666638 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rsmd\" (UniqueName: \"kubernetes.io/projected/e330849f-89c2-43aa-b1ff-22426d76c9fa-kube-api-access-8rsmd\") pod \"e330849f-89c2-43aa-b1ff-22426d76c9fa\" (UID: \"e330849f-89c2-43aa-b1ff-22426d76c9fa\") " May 13 00:37:20.666724 kubelet[2006]: I0513 00:37:20.666674 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cni-path\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666724 kubelet[2006]: I0513 00:37:20.666703 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gt72\" (UniqueName: \"kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-kube-api-access-4gt72\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666724 kubelet[2006]: I0513 00:37:20.666721 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-kernel\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666806 kubelet[2006]: I0513 00:37:20.666740 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc531689-2d6d-49c7-b50b-848e8affcc20-clustermesh-secrets\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666806 
kubelet[2006]: I0513 00:37:20.666758 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-config-path\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666806 kubelet[2006]: I0513 00:37:20.666776 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-net\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666806 kubelet[2006]: I0513 00:37:20.666792 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-hubble-tls\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666806 kubelet[2006]: I0513 00:37:20.666806 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-etc-cni-netd\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666936 kubelet[2006]: I0513 00:37:20.666822 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-hostproc\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666936 kubelet[2006]: I0513 00:37:20.666838 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-lib-modules\") pod 
\"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666936 kubelet[2006]: I0513 00:37:20.666877 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-xtables-lock\") pod \"fc531689-2d6d-49c7-b50b-848e8affcc20\" (UID: \"fc531689-2d6d-49c7-b50b-848e8affcc20\") " May 13 00:37:20.666936 kubelet[2006]: I0513 00:37:20.666893 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e330849f-89c2-43aa-b1ff-22426d76c9fa-cilium-config-path\") pod \"e330849f-89c2-43aa-b1ff-22426d76c9fa\" (UID: \"e330849f-89c2-43aa-b1ff-22426d76c9fa\") " May 13 00:37:20.670250 kubelet[2006]: I0513 00:37:20.670209 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.670336 kubelet[2006]: I0513 00:37:20.670213 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.670598 kubelet[2006]: I0513 00:37:20.670382 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.670598 kubelet[2006]: I0513 00:37:20.670436 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.670598 kubelet[2006]: I0513 00:37:20.670453 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.674443 kubelet[2006]: I0513 00:37:20.674264 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:37:20.674443 kubelet[2006]: I0513 00:37:20.674319 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.675390 kubelet[2006]: I0513 00:37:20.675363 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:20.676403 kubelet[2006]: I0513 00:37:20.676358 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.676480 kubelet[2006]: I0513 00:37:20.676408 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.676480 kubelet[2006]: I0513 00:37:20.676429 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.676730 kubelet[2006]: I0513 00:37:20.676708 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc531689-2d6d-49c7-b50b-848e8affcc20-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:37:20.676777 kubelet[2006]: I0513 00:37:20.676755 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:37:20.677079 kubelet[2006]: I0513 00:37:20.677044 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-kube-api-access-4gt72" (OuterVolumeSpecName: "kube-api-access-4gt72") pod "fc531689-2d6d-49c7-b50b-848e8affcc20" (UID: "fc531689-2d6d-49c7-b50b-848e8affcc20"). InnerVolumeSpecName "kube-api-access-4gt72". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:20.677208 kubelet[2006]: I0513 00:37:20.677180 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e330849f-89c2-43aa-b1ff-22426d76c9fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e330849f-89c2-43aa-b1ff-22426d76c9fa" (UID: "e330849f-89c2-43aa-b1ff-22426d76c9fa"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:37:20.678764 kubelet[2006]: I0513 00:37:20.678738 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e330849f-89c2-43aa-b1ff-22426d76c9fa-kube-api-access-8rsmd" (OuterVolumeSpecName: "kube-api-access-8rsmd") pod "e330849f-89c2-43aa-b1ff-22426d76c9fa" (UID: "e330849f-89c2-43aa-b1ff-22426d76c9fa"). InnerVolumeSpecName "kube-api-access-8rsmd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:20.767388 kubelet[2006]: I0513 00:37:20.767339 2006 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:37:20.767594 kubelet[2006]: I0513 00:37:20.767579 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:37:20.767670 kubelet[2006]: I0513 00:37:20.767658 2006 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8rsmd\" (UniqueName: \"kubernetes.io/projected/e330849f-89c2-43aa-b1ff-22426d76c9fa-kube-api-access-8rsmd\") on node \"localhost\" DevicePath \"\"" May 13 00:37:20.767745 kubelet[2006]: I0513 00:37:20.767734 2006 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:37:20.767803 kubelet[2006]: I0513 00:37:20.767793 2006 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4gt72\" (UniqueName: \"kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-kube-api-access-4gt72\") on node \"localhost\" DevicePath \"\"" May 13 00:37:20.767855 kubelet[2006]: I0513 00:37:20.767846 2006 reconciler_common.go:289] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.767909 kubelet[2006]: I0513 00:37:20.767899 2006 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc531689-2d6d-49c7-b50b-848e8affcc20-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.767966 kubelet[2006]: I0513 00:37:20.767955 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768072 kubelet[2006]: I0513 00:37:20.768061 2006 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768127 kubelet[2006]: I0513 00:37:20.768118 2006 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc531689-2d6d-49c7-b50b-848e8affcc20-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768184 kubelet[2006]: I0513 00:37:20.768174 2006 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768239 kubelet[2006]: I0513 00:37:20.768229 2006 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768296 kubelet[2006]: I0513 00:37:20.768287 2006 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768357 kubelet[2006]: I0513 00:37:20.768347 2006 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768408 kubelet[2006]: I0513 00:37:20.768398 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e330849f-89c2-43aa-b1ff-22426d76c9fa-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:37:20.768462 kubelet[2006]: I0513 00:37:20.768453 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc531689-2d6d-49c7-b50b-848e8affcc20-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 00:37:21.392265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3-rootfs.mount: Deactivated successfully.
May 13 00:37:21.392365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bac253091803087e855f32c84be8c3663ebfabd0bfd7fb2df9792e8c2867ccf-rootfs.mount: Deactivated successfully.
May 13 00:37:21.392432 systemd[1]: var-lib-kubelet-pods-e330849f\x2d89c2\x2d43aa\x2db1ff\x2d22426d76c9fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8rsmd.mount: Deactivated successfully.
May 13 00:37:21.392489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719-rootfs.mount: Deactivated successfully.
May 13 00:37:21.392544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b74de646fdc88826d400a69d25e468041a336afb81bfb80fd209b40c19575719-shm.mount: Deactivated successfully.
May 13 00:37:21.392593 systemd[1]: var-lib-kubelet-pods-fc531689\x2d2d6d\x2d49c7\x2db50b\x2d848e8affcc20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gt72.mount: Deactivated successfully.
May 13 00:37:21.392644 systemd[1]: var-lib-kubelet-pods-fc531689\x2d2d6d\x2d49c7\x2db50b\x2d848e8affcc20-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:37:21.392706 systemd[1]: var-lib-kubelet-pods-fc531689\x2d2d6d\x2d49c7\x2db50b\x2d848e8affcc20-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:37:21.433838 kubelet[2006]: I0513 00:37:21.433786 2006 scope.go:117] "RemoveContainer" containerID="9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3"
May 13 00:37:21.434908 env[1216]: time="2025-05-13T00:37:21.434870349Z" level=info msg="RemoveContainer for \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\""
May 13 00:37:21.437826 env[1216]: time="2025-05-13T00:37:21.437792009Z" level=info msg="RemoveContainer for \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" returns successfully"
May 13 00:37:21.438049 kubelet[2006]: I0513 00:37:21.438028 2006 scope.go:117] "RemoveContainer" containerID="b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f"
May 13 00:37:21.439276 systemd[1]: Removed slice kubepods-besteffort-pode330849f_89c2_43aa_b1ff_22426d76c9fa.slice.
May 13 00:37:21.443111 env[1216]: time="2025-05-13T00:37:21.440339786Z" level=info msg="RemoveContainer for \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\""
May 13 00:37:21.441381 systemd[1]: Removed slice kubepods-burstable-podfc531689_2d6d_49c7_b50b_848e8affcc20.slice.
May 13 00:37:21.441464 systemd[1]: kubepods-burstable-podfc531689_2d6d_49c7_b50b_848e8affcc20.slice: Consumed 6.938s CPU time.
May 13 00:37:21.444119 env[1216]: time="2025-05-13T00:37:21.443450727Z" level=info msg="RemoveContainer for \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\" returns successfully"
May 13 00:37:21.444212 kubelet[2006]: I0513 00:37:21.443616 2006 scope.go:117] "RemoveContainer" containerID="2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437"
May 13 00:37:21.444765 env[1216]: time="2025-05-13T00:37:21.444389014Z" level=info msg="RemoveContainer for \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\""
May 13 00:37:21.447021 env[1216]: time="2025-05-13T00:37:21.446991391Z" level=info msg="RemoveContainer for \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\" returns successfully"
May 13 00:37:21.447315 kubelet[2006]: I0513 00:37:21.447288 2006 scope.go:117] "RemoveContainer" containerID="c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d"
May 13 00:37:21.449057 env[1216]: time="2025-05-13T00:37:21.449020445Z" level=info msg="RemoveContainer for \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\""
May 13 00:37:21.452089 env[1216]: time="2025-05-13T00:37:21.452044026Z" level=info msg="RemoveContainer for \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\" returns successfully"
May 13 00:37:21.452243 kubelet[2006]: I0513 00:37:21.452213 2006 scope.go:117] "RemoveContainer" containerID="e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6"
May 13 00:37:21.453250 env[1216]: time="2025-05-13T00:37:21.453214514Z" level=info msg="RemoveContainer for \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\""
May 13 00:37:21.457516 env[1216]: time="2025-05-13T00:37:21.457466423Z" level=info msg="RemoveContainer for \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\" returns successfully"
May 13 00:37:21.457686 kubelet[2006]: I0513 00:37:21.457654 2006 scope.go:117] "RemoveContainer" containerID="9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3"
May 13 00:37:21.457950 env[1216]: time="2025-05-13T00:37:21.457869985Z" level=error msg="ContainerStatus for \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\": not found"
May 13 00:37:21.459437 kubelet[2006]: E0513 00:37:21.459393 2006 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\": not found" containerID="9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3"
May 13 00:37:21.459526 kubelet[2006]: I0513 00:37:21.459441 2006 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3"} err="failed to get container status \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c9d65e94d25752fdf83ec0d5f254ff76807612e163c9874b8cfc44fe6b5e7c3\": not found"
May 13 00:37:21.459570 kubelet[2006]: I0513 00:37:21.459535 2006 scope.go:117] "RemoveContainer" containerID="b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f"
May 13 00:37:21.459869 env[1216]: time="2025-05-13T00:37:21.459818479Z" level=error msg="ContainerStatus for \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\": not found"
May 13 00:37:21.460012 kubelet[2006]: E0513 00:37:21.459967 2006 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\": not found" containerID="b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f"
May 13 00:37:21.460044 kubelet[2006]: I0513 00:37:21.460020 2006 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f"} err="failed to get container status \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1ec3b1c43ae67adcfd334525baacb88d5b6ba66873201ea73f32e3e35fdab5f\": not found"
May 13 00:37:21.460044 kubelet[2006]: I0513 00:37:21.460037 2006 scope.go:117] "RemoveContainer" containerID="2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437"
May 13 00:37:21.460418 env[1216]: time="2025-05-13T00:37:21.460364242Z" level=error msg="ContainerStatus for \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\": not found"
May 13 00:37:21.460509 kubelet[2006]: E0513 00:37:21.460490 2006 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\": not found" containerID="2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437"
May 13 00:37:21.460546 kubelet[2006]: I0513 00:37:21.460516 2006 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437"} err="failed to get container status \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\": rpc error: code = NotFound desc = an error occurred when try to find container \"2afba5adce0a0123b678639dd633945d4858c123783858d1973814f30190d437\": not found"
May 13 00:37:21.460573 kubelet[2006]: I0513 00:37:21.460548 2006 scope.go:117] "RemoveContainer" containerID="c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d"
May 13 00:37:21.460722 env[1216]: time="2025-05-13T00:37:21.460669325Z" level=error msg="ContainerStatus for \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\": not found"
May 13 00:37:21.460828 kubelet[2006]: E0513 00:37:21.460810 2006 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\": not found" containerID="c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d"
May 13 00:37:21.460862 kubelet[2006]: I0513 00:37:21.460834 2006 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d"} err="failed to get container status \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1fc4973ad12f24341cdbd474c8b037f9e477d9858e98a2a81a80e956a96177d\": not found"
May 13 00:37:21.460862 kubelet[2006]: I0513 00:37:21.460849 2006 scope.go:117] "RemoveContainer" containerID="e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6"
May 13 00:37:21.461503 env[1216]: time="2025-05-13T00:37:21.461439130Z" level=error msg="ContainerStatus for \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\": not found"
May 13 00:37:21.461644 kubelet[2006]: E0513 00:37:21.461615 2006 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\": not found" containerID="e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6"
May 13 00:37:21.461682 kubelet[2006]: I0513 00:37:21.461642 2006 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6"} err="failed to get container status \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7bf755e85a418dbd0a0fb54f45aafad1519cb2dbe306ba29929fdd8a454cea6\": not found"
May 13 00:37:21.461682 kubelet[2006]: I0513 00:37:21.461661 2006 scope.go:117] "RemoveContainer" containerID="1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f"
May 13 00:37:21.462706 env[1216]: time="2025-05-13T00:37:21.462657898Z" level=info msg="RemoveContainer for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\""
May 13 00:37:21.464957 env[1216]: time="2025-05-13T00:37:21.464918953Z" level=info msg="RemoveContainer for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" returns successfully"
May 13 00:37:21.465153 kubelet[2006]: I0513 00:37:21.465128 2006 scope.go:117] "RemoveContainer" containerID="1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f"
May 13 00:37:21.465373 env[1216]: time="2025-05-13T00:37:21.465321516Z" level=error msg="ContainerStatus for \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\": not found"
May 13 00:37:21.465468 kubelet[2006]: E0513 00:37:21.465444 2006 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\": not found" containerID="1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f"
May 13 00:37:21.465501 kubelet[2006]: I0513 00:37:21.465474 2006 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f"} err="failed to get container status \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ddce03b8d5ea40bd494b266857ebc77193caaa637e2f57c4341e209f256913f\": not found"
May 13 00:37:22.239251 kubelet[2006]: I0513 00:37:22.239216 2006 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e330849f-89c2-43aa-b1ff-22426d76c9fa" path="/var/lib/kubelet/pods/e330849f-89c2-43aa-b1ff-22426d76c9fa/volumes"
May 13 00:37:22.240087 kubelet[2006]: I0513 00:37:22.240063 2006 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" path="/var/lib/kubelet/pods/fc531689-2d6d-49c7-b50b-848e8affcc20/volumes"
May 13 00:37:22.326768 sshd[3596]: pam_unix(sshd:session): session closed for user core
May 13 00:37:22.331277 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:44786.service.
May 13 00:37:22.331902 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:44778.service: Deactivated successfully.
May 13 00:37:22.332576 systemd[1]: session-21.scope: Deactivated successfully.
May 13 00:37:22.332765 systemd[1]: session-21.scope: Consumed 1.388s CPU time.
May 13 00:37:22.333216 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit.
May 13 00:37:22.334203 systemd-logind[1203]: Removed session 21.
May 13 00:37:22.369147 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 44786 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:37:22.370439 sshd[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:37:22.374094 systemd-logind[1203]: New session 22 of user core.
May 13 00:37:22.374939 systemd[1]: Started session-22.scope.
May 13 00:37:23.286841 kubelet[2006]: E0513 00:37:23.286803 2006 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:37:23.744370 sshd[3763]: pam_unix(sshd:session): session closed for user core
May 13 00:37:23.747475 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit.
May 13 00:37:23.747689 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:44786.service: Deactivated successfully.
May 13 00:37:23.748330 systemd[1]: session-22.scope: Deactivated successfully.
May 13 00:37:23.748489 systemd[1]: session-22.scope: Consumed 1.265s CPU time.
May 13 00:37:23.750087 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:59672.service.
May 13 00:37:23.753928 systemd-logind[1203]: Removed session 22.
May 13 00:37:23.792702 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 59672 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:37:23.794002 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:37:23.797132 kubelet[2006]: I0513 00:37:23.797097 2006 topology_manager.go:215] "Topology Admit Handler" podUID="0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" podNamespace="kube-system" podName="cilium-7z96t"
May 13 00:37:23.797290 kubelet[2006]: E0513 00:37:23.797274 2006 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" containerName="mount-bpf-fs"
May 13 00:37:23.797398 kubelet[2006]: E0513 00:37:23.797386 2006 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e330849f-89c2-43aa-b1ff-22426d76c9fa" containerName="cilium-operator"
May 13 00:37:23.797467 kubelet[2006]: E0513 00:37:23.797458 2006 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" containerName="cilium-agent"
May 13 00:37:23.797677 kubelet[2006]: E0513 00:37:23.797659 2006 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" containerName="mount-cgroup"
May 13 00:37:23.797791 systemd-logind[1203]: New session 23 of user core.
May 13 00:37:23.798361 kubelet[2006]: E0513 00:37:23.798346 2006 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" containerName="apply-sysctl-overwrites"
May 13 00:37:23.798451 kubelet[2006]: E0513 00:37:23.798440 2006 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" containerName="clean-cilium-state"
May 13 00:37:23.798557 kubelet[2006]: I0513 00:37:23.798543 2006 memory_manager.go:354] "RemoveStaleState removing state" podUID="e330849f-89c2-43aa-b1ff-22426d76c9fa" containerName="cilium-operator"
May 13 00:37:23.798592 systemd[1]: Started session-23.scope.
May 13 00:37:23.798748 kubelet[2006]: I0513 00:37:23.798736 2006 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc531689-2d6d-49c7-b50b-848e8affcc20" containerName="cilium-agent"
May 13 00:37:23.804120 systemd[1]: Created slice kubepods-burstable-pod0bbe8e48_a44f_4bd9_b71d_1dc9d236145c.slice.
May 13 00:37:23.924764 sshd[3776]: pam_unix(sshd:session): session closed for user core
May 13 00:37:23.927966 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:59672.service: Deactivated successfully.
May 13 00:37:23.928613 systemd[1]: session-23.scope: Deactivated successfully.
May 13 00:37:23.929290 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit.
May 13 00:37:23.931391 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:59686.service.
May 13 00:37:23.933632 systemd-logind[1203]: Removed session 23.
May 13 00:37:23.936759 kubelet[2006]: E0513 00:37:23.936719 2006 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-4hhsk lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-7z96t" podUID="0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"
May 13 00:37:23.969397 sshd[3789]: Accepted publickey for core from 10.0.0.1 port 59686 ssh2: RSA SHA256:JqaCSrDIbVVQNbxsbpFjz60HxEXsX2X9A6oTs4HqYQk
May 13 00:37:23.971007 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:37:23.974773 systemd-logind[1203]: New session 24 of user core.
May 13 00:37:23.975274 systemd[1]: Started session-24.scope.
May 13 00:37:23.987117 kubelet[2006]: I0513 00:37:23.987079 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-etc-cni-netd\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987225 kubelet[2006]: I0513 00:37:23.987129 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhsk\" (UniqueName: \"kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-kube-api-access-4hhsk\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987225 kubelet[2006]: I0513 00:37:23.987148 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-lib-modules\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987225 kubelet[2006]: I0513 00:37:23.987164 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-kernel\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987225 kubelet[2006]: I0513 00:37:23.987181 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-cgroup\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987225 kubelet[2006]: I0513 00:37:23.987196 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-clustermesh-secrets\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987367 kubelet[2006]: I0513 00:37:23.987210 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-ipsec-secrets\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987367 kubelet[2006]: I0513 00:37:23.987227 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-net\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987367 kubelet[2006]: I0513 00:37:23.987244 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hubble-tls\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987367 kubelet[2006]: I0513 00:37:23.987270 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-xtables-lock\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987367 kubelet[2006]: I0513 00:37:23.987307 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-run\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987367 kubelet[2006]: I0513 00:37:23.987321 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-bpf-maps\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987554 kubelet[2006]: I0513 00:37:23.987339 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-config-path\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987554 kubelet[2006]: I0513 00:37:23.987355 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cni-path\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:23.987554 kubelet[2006]: I0513 00:37:23.987371 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hostproc\") pod \"cilium-7z96t\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") " pod="kube-system/cilium-7z96t"
May 13 00:37:24.591710 kubelet[2006]: I0513 00:37:24.591651 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hhsk\" (UniqueName: \"kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-kube-api-access-4hhsk\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592443 kubelet[2006]: I0513 00:37:24.592398 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-kernel\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592443 kubelet[2006]: I0513 00:37:24.592434 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-net\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592568 kubelet[2006]: I0513 00:37:24.592451 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-xtables-lock\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592568 kubelet[2006]: I0513 00:37:24.592469 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hostproc\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592568 kubelet[2006]: I0513 00:37:24.592484 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-etc-cni-netd\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592568 kubelet[2006]: I0513 00:37:24.592499 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cni-path\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592568 kubelet[2006]: I0513 00:37:24.592493 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.592568 kubelet[2006]: I0513 00:37:24.592514 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-lib-modules\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592745 kubelet[2006]: I0513 00:37:24.592543 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-config-path\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592745 kubelet[2006]: I0513 00:37:24.592547 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hostproc" (OuterVolumeSpecName: "hostproc") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.592745 kubelet[2006]: I0513 00:37:24.592563 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-cgroup\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592745 kubelet[2006]: I0513 00:37:24.592569 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.592745 kubelet[2006]: I0513 00:37:24.592585 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-clustermesh-secrets\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592604 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-ipsec-secrets\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592619 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-bpf-maps\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592638 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hubble-tls\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592652 2006 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-run\") pod \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\" (UID: \"0bbe8e48-a44f-4bd9-b71d-1dc9d236145c\") "
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592686 2006 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592709 2006 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:37:24.592855 kubelet[2006]: I0513 00:37:24.592717 2006 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:37:24.597307 kubelet[2006]: I0513 00:37:24.592586 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cni-path" (OuterVolumeSpecName: "cni-path") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.597307 kubelet[2006]: I0513 00:37:24.592598 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.597307 kubelet[2006]: I0513 00:37:24.592609 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.597307 kubelet[2006]: I0513 00:37:24.592619 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.597307 kubelet[2006]: I0513 00:37:24.592747 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.595937 systemd[1]: var-lib-kubelet-pods-0bbe8e48\x2da44f\x2d4bd9\x2db71d\x2d1dc9d236145c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4hhsk.mount: Deactivated successfully.
May 13 00:37:24.597546 kubelet[2006]: I0513 00:37:24.593740 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.597546 kubelet[2006]: I0513 00:37:24.595199 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:37:24.597546 kubelet[2006]: I0513 00:37:24.595206 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-kube-api-access-4hhsk" (OuterVolumeSpecName: "kube-api-access-4hhsk") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "kube-api-access-4hhsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:37:24.597546 kubelet[2006]: I0513 00:37:24.595222 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:37:24.596040 systemd[1]: var-lib-kubelet-pods-0bbe8e48\x2da44f\x2d4bd9\x2db71d\x2d1dc9d236145c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:37:24.597872 kubelet[2006]: I0513 00:37:24.596119 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:37:24.598467 kubelet[2006]: I0513 00:37:24.598435 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "cilium-ipsec-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:37:24.598799 kubelet[2006]: I0513 00:37:24.598753 2006 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" (UID: "0bbe8e48-a44f-4bd9-b71d-1dc9d236145c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:37:24.693449 kubelet[2006]: I0513 00:37:24.693411 2006 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4hhsk\" (UniqueName: \"kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-kube-api-access-4hhsk\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.693622 kubelet[2006]: I0513 00:37:24.693608 2006 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.693743 kubelet[2006]: I0513 00:37:24.693730 2006 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.693830 kubelet[2006]: I0513 00:37:24.693807 2006 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.693914 kubelet[2006]: I0513 00:37:24.693905 2006 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.693973 kubelet[2006]: I0513 00:37:24.693963 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.694024 kubelet[2006]: I0513 00:37:24.694015 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.694109 kubelet[2006]: I0513 00:37:24.694098 2006 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.694181 kubelet[2006]: I0513 00:37:24.694171 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.694235 kubelet[2006]: I0513 00:37:24.694226 2006 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.694289 kubelet[2006]: I0513 00:37:24.694279 2006 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:37:24.694341 kubelet[2006]: I0513 00:37:24.694332 2006 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:37:25.095187 systemd[1]: var-lib-kubelet-pods-0bbe8e48\x2da44f\x2d4bd9\x2db71d\x2d1dc9d236145c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 13 00:37:25.095286 systemd[1]: var-lib-kubelet-pods-0bbe8e48\x2da44f\x2d4bd9\x2db71d\x2d1dc9d236145c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:37:25.446590 systemd[1]: Removed slice kubepods-burstable-pod0bbe8e48_a44f_4bd9_b71d_1dc9d236145c.slice.
May 13 00:37:25.473708 kubelet[2006]: I0513 00:37:25.473637 2006 topology_manager.go:215] "Topology Admit Handler" podUID="33f397bd-0881-4996-847a-a861e2ada744" podNamespace="kube-system" podName="cilium-hdlrw"
May 13 00:37:25.479382 systemd[1]: Created slice kubepods-burstable-pod33f397bd_0881_4996_847a_a861e2ada744.slice.
May 13 00:37:25.600316 kubelet[2006]: I0513 00:37:25.600251 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-host-proc-sys-kernel\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600316 kubelet[2006]: I0513 00:37:25.600300 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33f397bd-0881-4996-847a-a861e2ada744-hubble-tls\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600316 kubelet[2006]: I0513 00:37:25.600318 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-cilium-run\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600316 kubelet[2006]: I0513 00:37:25.600333 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-lib-modules\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600806 kubelet[2006]: I0513 00:37:25.600351 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-xtables-lock\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600806 kubelet[2006]: I0513 00:37:25.600367 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-host-proc-sys-net\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600806 kubelet[2006]: I0513 00:37:25.600381 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-hostproc\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600806 kubelet[2006]: I0513 00:37:25.600397 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-cni-path\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600806 kubelet[2006]: I0513 00:37:25.600412 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33f397bd-0881-4996-847a-a861e2ada744-cilium-config-path\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600806 kubelet[2006]: I0513 00:37:25.600427 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33f397bd-0881-4996-847a-a861e2ada744-clustermesh-secrets\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600969 kubelet[2006]: I0513 00:37:25.600443 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-bpf-maps\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600969 kubelet[2006]: I0513 00:37:25.600462 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-cilium-cgroup\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600969 kubelet[2006]: I0513 00:37:25.600478 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lth4l\" (UniqueName: \"kubernetes.io/projected/33f397bd-0881-4996-847a-a861e2ada744-kube-api-access-lth4l\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600969 kubelet[2006]: I0513 00:37:25.600493 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33f397bd-0881-4996-847a-a861e2ada744-etc-cni-netd\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.600969 kubelet[2006]: I0513 00:37:25.600511 2006 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33f397bd-0881-4996-847a-a861e2ada744-cilium-ipsec-secrets\") pod \"cilium-hdlrw\" (UID: \"33f397bd-0881-4996-847a-a861e2ada744\") " pod="kube-system/cilium-hdlrw"
May 13 00:37:25.781501 kubelet[2006]: E0513 00:37:25.781457 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:25.782089 env[1216]: time="2025-05-13T00:37:25.782018940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdlrw,Uid:33f397bd-0881-4996-847a-a861e2ada744,Namespace:kube-system,Attempt:0,}"
May 13 00:37:25.795776 env[1216]: time="2025-05-13T00:37:25.795574793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:37:25.795776 env[1216]: time="2025-05-13T00:37:25.795624874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:37:25.795776 env[1216]: time="2025-05-13T00:37:25.795635394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:37:25.796115 env[1216]: time="2025-05-13T00:37:25.795817315Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6 pid=3819 runtime=io.containerd.runc.v2
May 13 00:37:25.805394 systemd[1]: Started cri-containerd-fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6.scope.
May 13 00:37:25.834235 env[1216]: time="2025-05-13T00:37:25.834191900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdlrw,Uid:33f397bd-0881-4996-847a-a861e2ada744,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\""
May 13 00:37:25.834860 kubelet[2006]: E0513 00:37:25.834832 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:25.836964 env[1216]: time="2025-05-13T00:37:25.836927678Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:37:25.846166 env[1216]: time="2025-05-13T00:37:25.846124342Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd\""
May 13 00:37:25.847609 env[1216]: time="2025-05-13T00:37:25.846712346Z" level=info msg="StartContainer for \"568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd\""
May 13 00:37:25.860188 systemd[1]: Started cri-containerd-568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd.scope.
May 13 00:37:25.890731 env[1216]: time="2025-05-13T00:37:25.890673609Z" level=info msg="StartContainer for \"568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd\" returns successfully"
May 13 00:37:25.899011 systemd[1]: cri-containerd-568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd.scope: Deactivated successfully.
May 13 00:37:25.924277 env[1216]: time="2025-05-13T00:37:25.924229681Z" level=info msg="shim disconnected" id=568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd
May 13 00:37:25.924492 env[1216]: time="2025-05-13T00:37:25.924472762Z" level=warning msg="cleaning up after shim disconnected" id=568aa3e9cf4ca08b3ea030641880e1ae4f42bf23fe7ed0cb63628c7aba7edecd namespace=k8s.io
May 13 00:37:25.924563 env[1216]: time="2025-05-13T00:37:25.924548923Z" level=info msg="cleaning up dead shim"
May 13 00:37:25.931127 env[1216]: time="2025-05-13T00:37:25.931093728Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3902 runtime=io.containerd.runc.v2\n"
May 13 00:37:26.239831 kubelet[2006]: I0513 00:37:26.239785 2006 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbe8e48-a44f-4bd9-b71d-1dc9d236145c" path="/var/lib/kubelet/pods/0bbe8e48-a44f-4bd9-b71d-1dc9d236145c/volumes"
May 13 00:37:26.445830 kubelet[2006]: E0513 00:37:26.445800 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:26.448004 env[1216]: time="2025-05-13T00:37:26.447943504Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:37:26.462119 env[1216]: time="2025-05-13T00:37:26.461733519Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539\""
May 13 00:37:26.462389 env[1216]: time="2025-05-13T00:37:26.462347763Z" level=info msg="StartContainer for \"aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539\""
May 13 00:37:26.482806 systemd[1]: Started cri-containerd-aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539.scope.
May 13 00:37:26.512867 env[1216]: time="2025-05-13T00:37:26.512755232Z" level=info msg="StartContainer for \"aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539\" returns successfully"
May 13 00:37:26.519625 systemd[1]: cri-containerd-aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539.scope: Deactivated successfully.
May 13 00:37:26.559204 env[1216]: time="2025-05-13T00:37:26.559146313Z" level=info msg="shim disconnected" id=aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539
May 13 00:37:26.559204 env[1216]: time="2025-05-13T00:37:26.559194754Z" level=warning msg="cleaning up after shim disconnected" id=aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539 namespace=k8s.io
May 13 00:37:26.559204 env[1216]: time="2025-05-13T00:37:26.559204434Z" level=info msg="cleaning up dead shim"
May 13 00:37:26.566322 env[1216]: time="2025-05-13T00:37:26.566282443Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3965 runtime=io.containerd.runc.v2\n"
May 13 00:37:27.095352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa55f4651c40c8c164e3cdd18ea20fc825ae77ffc90d577e4ac6b6bd59a3c539-rootfs.mount: Deactivated successfully.
May 13 00:37:27.236801 kubelet[2006]: E0513 00:37:27.236768 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:27.449213 kubelet[2006]: E0513 00:37:27.449073 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:27.451870 env[1216]: time="2025-05-13T00:37:27.451806942Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:37:27.464709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149251048.mount: Deactivated successfully.
May 13 00:37:27.471316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909115140.mount: Deactivated successfully.
May 13 00:37:27.477250 env[1216]: time="2025-05-13T00:37:27.477183758Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18\""
May 13 00:37:27.477726 env[1216]: time="2025-05-13T00:37:27.477686761Z" level=info msg="StartContainer for \"f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18\""
May 13 00:37:27.507297 systemd[1]: Started cri-containerd-f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18.scope.
May 13 00:37:27.540107 systemd[1]: cri-containerd-f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18.scope: Deactivated successfully.
May 13 00:37:27.541328 env[1216]: time="2025-05-13T00:37:27.541285403Z" level=info msg="StartContainer for \"f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18\" returns successfully"
May 13 00:37:27.563688 env[1216]: time="2025-05-13T00:37:27.563637838Z" level=info msg="shim disconnected" id=f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18
May 13 00:37:27.563688 env[1216]: time="2025-05-13T00:37:27.563680319Z" level=warning msg="cleaning up after shim disconnected" id=f83147503748448f28acd65be907651ba282b86088bda5d2bbbed1f5aaf4aa18 namespace=k8s.io
May 13 00:37:27.563879 env[1216]: time="2025-05-13T00:37:27.563690079Z" level=info msg="cleaning up dead shim"
May 13 00:37:27.570012 env[1216]: time="2025-05-13T00:37:27.569966522Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4023 runtime=io.containerd.runc.v2\n"
May 13 00:37:28.288008 kubelet[2006]: E0513 00:37:28.287967 2006 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:37:28.452883 kubelet[2006]: E0513 00:37:28.452851 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:28.457776 env[1216]: time="2025-05-13T00:37:28.456730729Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:37:28.467913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888319793.mount: Deactivated successfully.
May 13 00:37:28.472673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount735880456.mount: Deactivated successfully.
May 13 00:37:28.475774 env[1216]: time="2025-05-13T00:37:28.475680581Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53\""
May 13 00:37:28.477590 env[1216]: time="2025-05-13T00:37:28.476352786Z" level=info msg="StartContainer for \"e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53\""
May 13 00:37:28.491346 systemd[1]: Started cri-containerd-e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53.scope.
May 13 00:37:28.518859 systemd[1]: cri-containerd-e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53.scope: Deactivated successfully.
May 13 00:37:28.519978 env[1216]: time="2025-05-13T00:37:28.519849888Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f397bd_0881_4996_847a_a861e2ada744.slice/cri-containerd-e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53.scope/memory.events\": no such file or directory"
May 13 00:37:28.521721 env[1216]: time="2025-05-13T00:37:28.521648781Z" level=info msg="StartContainer for \"e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53\" returns successfully"
May 13 00:37:28.540726 env[1216]: time="2025-05-13T00:37:28.540350031Z" level=info msg="shim disconnected" id=e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53
May 13 00:37:28.540726 env[1216]: time="2025-05-13T00:37:28.540390512Z" level=warning msg="cleaning up after shim disconnected" id=e74909bdeebd661ff9a82a8b5b349b9aead1b6506aaf390926ee8c6848044f53 namespace=k8s.io
May 13 00:37:28.540726 env[1216]: time="2025-05-13T00:37:28.540399952Z" level=info msg="cleaning up dead shim"
May 13 00:37:28.547469 env[1216]: time="2025-05-13T00:37:28.547432761Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:37:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4079 runtime=io.containerd.runc.v2\n"
May 13 00:37:29.237460 kubelet[2006]: E0513 00:37:29.237419 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:29.456473 kubelet[2006]: E0513 00:37:29.456440 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:29.458838 env[1216]: time="2025-05-13T00:37:29.458798477Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:37:29.474528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136609729.mount: Deactivated successfully.
May 13 00:37:29.474980 env[1216]: time="2025-05-13T00:37:29.474776829Z" level=info msg="CreateContainer within sandbox \"fe17b4bb4199e1a2af4bff0a9f5d546cbdf4bd5885595ad64a7dbd3902d535e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"40b3b669278505263dbba2842fa928956a8456456368d0e1bdbc8cfecdc66151\""
May 13 00:37:29.475508 env[1216]: time="2025-05-13T00:37:29.475483034Z" level=info msg="StartContainer for \"40b3b669278505263dbba2842fa928956a8456456368d0e1bdbc8cfecdc66151\""
May 13 00:37:29.489087 systemd[1]: Started cri-containerd-40b3b669278505263dbba2842fa928956a8456456368d0e1bdbc8cfecdc66151.scope.
May 13 00:37:29.521736 env[1216]: time="2025-05-13T00:37:29.521681877Z" level=info msg="StartContainer for \"40b3b669278505263dbba2842fa928956a8456456368d0e1bdbc8cfecdc66151\" returns successfully"
May 13 00:37:29.774957 kubelet[2006]: I0513 00:37:29.774860 2006 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:37:29Z","lastTransitionTime":"2025-05-13T00:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:37:29.780720 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 13 00:37:30.095547 systemd[1]: run-containerd-runc-k8s.io-40b3b669278505263dbba2842fa928956a8456456368d0e1bdbc8cfecdc66151-runc.xJmSlk.mount: Deactivated successfully.
May 13 00:37:30.461314 kubelet[2006]: E0513 00:37:30.460815 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:30.475264 kubelet[2006]: I0513 00:37:30.475067 2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hdlrw" podStartSLOduration=5.475050345 podStartE2EDuration="5.475050345s" podCreationTimestamp="2025-05-13 00:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:37:30.474806344 +0000 UTC m=+82.337924197" watchObservedRunningTime="2025-05-13 00:37:30.475050345 +0000 UTC m=+82.338168198"
May 13 00:37:31.782920 kubelet[2006]: E0513 00:37:31.782875 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:32.560096 systemd-networkd[1043]: lxc_health: Link UP
May 13 00:37:32.568094 systemd-networkd[1043]: lxc_health: Gained carrier
May 13 00:37:32.568722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:37:33.784080 kubelet[2006]: E0513 00:37:33.784047 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:33.789866 systemd-networkd[1043]: lxc_health: Gained IPv6LL
May 13 00:37:34.449826 systemd[1]: run-containerd-runc-k8s.io-40b3b669278505263dbba2842fa928956a8456456368d0e1bdbc8cfecdc66151-runc.Cr4k7N.mount: Deactivated successfully.
May 13 00:37:34.467721 kubelet[2006]: E0513 00:37:34.467581 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:35.468981 kubelet[2006]: E0513 00:37:35.468941 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:38.237072 kubelet[2006]: E0513 00:37:38.237034 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:37:38.756976 sshd[3789]: pam_unix(sshd:session): session closed for user core
May 13 00:37:38.759769 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:59686.service: Deactivated successfully.
May 13 00:37:38.760564 systemd[1]: session-24.scope: Deactivated successfully.
May 13 00:37:38.761117 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit.
May 13 00:37:38.761762 systemd-logind[1203]: Removed session 24.
May 13 00:37:40.237964 kubelet[2006]: E0513 00:37:40.237906 2006 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"