May 14 00:46:07.707740 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 00:46:07.707759 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025 May 14 00:46:07.707767 kernel: efi: EFI v2.70 by EDK II May 14 00:46:07.707776 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 14 00:46:07.707782 kernel: random: crng init done May 14 00:46:07.707787 kernel: ACPI: Early table checksum verification disabled May 14 00:46:07.707793 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 14 00:46:07.707799 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 00:46:07.707805 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707810 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707815 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707820 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707826 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707831 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707839 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707845 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707851 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:46:07.707857 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 00:46:07.707862 kernel: NUMA: Failed to initialise from firmware May 14 00:46:07.707868 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:46:07.707874 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff] May 14 00:46:07.707882 kernel: Zone ranges: May 14 00:46:07.707888 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:46:07.707896 kernel: DMA32 empty May 14 00:46:07.707901 kernel: Normal empty May 14 00:46:07.707907 kernel: Movable zone start for each node May 14 00:46:07.707912 kernel: Early memory node ranges May 14 00:46:07.707918 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 14 00:46:07.707924 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 14 00:46:07.707929 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 14 00:46:07.707935 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 14 00:46:07.707940 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 14 00:46:07.707946 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 14 00:46:07.707951 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 14 00:46:07.707957 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:46:07.707964 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 00:46:07.707970 kernel: psci: probing for conduit method from ACPI. May 14 00:46:07.707975 kernel: psci: PSCIv1.1 detected in firmware. 
May 14 00:46:07.707981 kernel: psci: Using standard PSCI v0.2 function IDs May 14 00:46:07.707987 kernel: psci: Trusted OS migration not required May 14 00:46:07.707995 kernel: psci: SMC Calling Convention v1.1 May 14 00:46:07.708003 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 00:46:07.708010 kernel: ACPI: SRAT not present May 14 00:46:07.708016 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 14 00:46:07.708022 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 14 00:46:07.708028 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 00:46:07.708034 kernel: Detected PIPT I-cache on CPU0 May 14 00:46:07.708040 kernel: CPU features: detected: GIC system register CPU interface May 14 00:46:07.708047 kernel: CPU features: detected: Hardware dirty bit management May 14 00:46:07.708054 kernel: CPU features: detected: Spectre-v4 May 14 00:46:07.708060 kernel: CPU features: detected: Spectre-BHB May 14 00:46:07.708067 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 00:46:07.708073 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 00:46:07.708079 kernel: CPU features: detected: ARM erratum 1418040 May 14 00:46:07.708085 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 00:46:07.708091 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 14 00:46:07.708097 kernel: Policy zone: DMA May 14 00:46:07.708104 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:46:07.708111 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:46:07.708117 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:46:07.708123 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:46:07.708129 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:46:07.708141 kernel: Memory: 2457344K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114944K reserved, 0K cma-reserved) May 14 00:46:07.708149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 00:46:07.708155 kernel: trace event string verifier disabled May 14 00:46:07.708161 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:46:07.708168 kernel: rcu: RCU event tracing is enabled. May 14 00:46:07.708174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 00:46:07.708180 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:46:07.708186 kernel: Tracing variant of Tasks RCU enabled. May 14 00:46:07.708192 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 00:46:07.708198 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 00:46:07.708204 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 00:46:07.708212 kernel: GICv3: 256 SPIs implemented May 14 00:46:07.708218 kernel: GICv3: 0 Extended SPIs implemented May 14 00:46:07.708224 kernel: GICv3: Distributor has no Range Selector support May 14 00:46:07.708229 kernel: Root IRQ handler: gic_handle_irq May 14 00:46:07.708235 kernel: GICv3: 16 PPIs implemented May 14 00:46:07.708241 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 00:46:07.708255 kernel: ACPI: SRAT not present May 14 00:46:07.708261 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 00:46:07.708267 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:46:07.708274 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 14 00:46:07.708280 kernel: GICv3: using LPI property table @0x00000000400d0000 May 14 00:46:07.708286 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 14 00:46:07.708293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:46:07.708299 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 00:46:07.708306 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 00:46:07.708312 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 00:46:07.708318 kernel: arm-pv: using stolen time PV May 14 00:46:07.708324 kernel: Console: colour dummy device 80x25 May 14 00:46:07.708330 kernel: ACPI: Core revision 20210730 May 14 00:46:07.708337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 00:46:07.708343 kernel: pid_max: default: 32768 minimum: 301 May 14 00:46:07.708349 kernel: LSM: Security Framework initializing May 14 00:46:07.708356 kernel: SELinux: Initializing. May 14 00:46:07.708363 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:46:07.708369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:46:07.708375 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 14 00:46:07.708381 kernel: rcu: Hierarchical SRCU implementation. May 14 00:46:07.708387 kernel: Platform MSI: ITS@0x8080000 domain created May 14 00:46:07.708393 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 00:46:07.708399 kernel: Remapping and enabling EFI services. May 14 00:46:07.708406 kernel: smp: Bringing up secondary CPUs ... 
May 14 00:46:07.708413 kernel: Detected PIPT I-cache on CPU1 May 14 00:46:07.708419 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 00:46:07.708425 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 14 00:46:07.708432 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:46:07.708438 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 00:46:07.708444 kernel: Detected PIPT I-cache on CPU2 May 14 00:46:07.708450 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 00:46:07.708457 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 14 00:46:07.708463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:46:07.708469 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 00:46:07.708477 kernel: Detected PIPT I-cache on CPU3 May 14 00:46:07.708483 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 00:46:07.708492 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 14 00:46:07.708499 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:46:07.708509 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 00:46:07.708516 kernel: smp: Brought up 1 node, 4 CPUs May 14 00:46:07.708523 kernel: SMP: Total of 4 processors activated. May 14 00:46:07.708531 kernel: CPU features: detected: 32-bit EL0 Support May 14 00:46:07.708539 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 00:46:07.710586 kernel: CPU features: detected: Common not Private translations May 14 00:46:07.710601 kernel: CPU features: detected: CRC32 instructions May 14 00:46:07.710608 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 00:46:07.710618 kernel: CPU features: detected: LSE atomic instructions May 14 00:46:07.710625 kernel: CPU features: detected: Privileged Access Never May 14 00:46:07.710632 kernel: CPU features: detected: RAS Extension Support May 14 00:46:07.710638 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 00:46:07.710645 kernel: CPU: All CPU(s) started at EL1 May 14 00:46:07.710653 kernel: alternatives: patching kernel code May 14 00:46:07.710660 kernel: devtmpfs: initialized May 14 00:46:07.710666 kernel: KASLR enabled May 14 00:46:07.710673 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:46:07.710680 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 00:46:07.710686 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:46:07.710693 kernel: SMBIOS 3.0.0 present. 
May 14 00:46:07.710700 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 14 00:46:07.710706 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:46:07.710714 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 00:46:07.710721 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 00:46:07.710728 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 00:46:07.710734 kernel: audit: initializing netlink subsys (disabled) May 14 00:46:07.710741 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 May 14 00:46:07.710747 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:46:07.710754 kernel: cpuidle: using governor menu May 14 00:46:07.710760 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 14 00:46:07.710767 kernel: ASID allocator initialised with 32768 entries May 14 00:46:07.710775 kernel: ACPI: bus type PCI registered May 14 00:46:07.710781 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:46:07.710788 kernel: Serial: AMBA PL011 UART driver May 14 00:46:07.710794 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:46:07.710801 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 14 00:46:07.710807 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:46:07.710814 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 14 00:46:07.710820 kernel: cryptd: max_cpu_qlen set to 1000 May 14 00:46:07.710827 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 00:46:07.710835 kernel: ACPI: Added _OSI(Module Device) May 14 00:46:07.710841 kernel: ACPI: Added _OSI(Processor Device) May 14 00:46:07.710848 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:46:07.710854 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:46:07.710861 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 14 00:46:07.710867 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 14 00:46:07.710874 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 14 00:46:07.710880 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:46:07.710887 kernel: ACPI: Interpreter enabled May 14 00:46:07.710894 kernel: ACPI: Using GIC for interrupt routing May 14 00:46:07.710901 kernel: ACPI: MCFG table detected, 1 entries May 14 00:46:07.710908 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 00:46:07.710914 kernel: printk: console [ttyAMA0] enabled May 14 00:46:07.710921 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:46:07.711099 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:46:07.711182 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 00:46:07.711266 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 00:46:07.711329 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 00:46:07.711387 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 00:46:07.711395 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 00:46:07.711402 kernel: PCI host bridge to bus 0000:00 May 14 00:46:07.711507 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 14 00:46:07.711575 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 14 00:46:07.711639 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 00:46:07.711702 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:46:07.711780 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 14 00:46:07.711857 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 14 00:46:07.711924 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 14 00:46:07.711984 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 14 00:46:07.712044 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:46:07.712105 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:46:07.712171 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 14 00:46:07.712233 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 14 00:46:07.712303 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 00:46:07.712359 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 00:46:07.712411 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 00:46:07.712420 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 00:46:07.712427 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 00:46:07.712436 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 00:46:07.712443 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 00:46:07.712449 kernel: iommu: Default domain type: Translated May 14 00:46:07.712456 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 00:46:07.712463 kernel: vgaarb: loaded May 14 00:46:07.712469 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 00:46:07.712476 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 00:46:07.712483 kernel: PTP clock support registered May 14 00:46:07.712489 kernel: Registered efivars operations May 14 00:46:07.712497 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 00:46:07.712504 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:46:07.712511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:46:07.712518 kernel: pnp: PnP ACPI init May 14 00:46:07.712600 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 00:46:07.712609 kernel: pnp: PnP ACPI: found 1 devices May 14 00:46:07.712616 kernel: NET: Registered PF_INET protocol family May 14 00:46:07.712623 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 00:46:07.712632 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 00:46:07.712638 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:46:07.712645 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 00:46:07.712652 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 14 00:46:07.712658 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 00:46:07.712665 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:46:07.712672 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:46:07.712678 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:46:07.712685 kernel: PCI: CLS 0 bytes, default 64 May 14 00:46:07.712693 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 00:46:07.712700 kernel: kvm [1]: HYP mode not available May 14 00:46:07.712706 kernel: Initialise system trusted keyrings May 14 00:46:07.712713 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 00:46:07.712719 kernel: Key type asymmetric registered May 14 00:46:07.712726 kernel: Asymmetric key parser 'x509' registered May 14 00:46:07.712732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 00:46:07.712739 kernel: io scheduler mq-deadline registered May 14 00:46:07.712745 kernel: io scheduler kyber registered May 14 00:46:07.712753 kernel: io scheduler bfq registered May 14 00:46:07.712760 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 00:46:07.712766 kernel: ACPI: button: Power Button [PWRB] May 14 00:46:07.712773 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 00:46:07.712834 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 00:46:07.712843 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:46:07.712849 kernel: thunder_xcv, ver 1.0 May 14 00:46:07.712856 kernel: thunder_bgx, ver 1.0 May 14 00:46:07.712863 kernel: nicpf, ver 1.0 May 14 00:46:07.712870 kernel: nicvf, ver 1.0 May 14 00:46:07.712935 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 00:46:07.712992 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:46:07 UTC (1747183567) May 14 00:46:07.713001 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:46:07.713007 kernel: NET: Registered PF_INET6 protocol family May 14 00:46:07.713014 kernel: Segment Routing with IPv6 May 14 00:46:07.713021 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:46:07.713028 kernel: NET: Registered PF_PACKET protocol family May 14 00:46:07.713036 kernel: Key type 
dns_resolver registered May 14 00:46:07.713042 kernel: registered taskstats version 1 May 14 00:46:07.713049 kernel: Loading compiled-in X.509 certificates May 14 00:46:07.713056 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b' May 14 00:46:07.713062 kernel: Key type .fscrypt registered May 14 00:46:07.713069 kernel: Key type fscrypt-provisioning registered May 14 00:46:07.713076 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 00:46:07.713082 kernel: ima: Allocated hash algorithm: sha1 May 14 00:46:07.713089 kernel: ima: No architecture policies found May 14 00:46:07.713096 kernel: clk: Disabling unused clocks May 14 00:46:07.713103 kernel: Freeing unused kernel memory: 36480K May 14 00:46:07.713109 kernel: Run /init as init process May 14 00:46:07.713116 kernel: with arguments: May 14 00:46:07.713122 kernel: /init May 14 00:46:07.713129 kernel: with environment: May 14 00:46:07.713135 kernel: HOME=/ May 14 00:46:07.713149 kernel: TERM=linux May 14 00:46:07.713156 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:46:07.713166 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:46:07.713175 systemd[1]: Detected virtualization kvm. May 14 00:46:07.713182 systemd[1]: Detected architecture arm64. May 14 00:46:07.713189 systemd[1]: Running in initrd. May 14 00:46:07.713196 systemd[1]: No hostname configured, using default hostname. May 14 00:46:07.713203 systemd[1]: Hostname set to . May 14 00:46:07.713210 systemd[1]: Initializing machine ID from VM UUID. May 14 00:46:07.713218 systemd[1]: Queued start job for default target initrd.target. May 14 00:46:07.713225 systemd[1]: Started systemd-ask-password-console.path. May 14 00:46:07.713232 systemd[1]: Reached target cryptsetup.target. May 14 00:46:07.713239 systemd[1]: Reached target paths.target. May 14 00:46:07.713254 systemd[1]: Reached target slices.target. May 14 00:46:07.713262 systemd[1]: Reached target swap.target. May 14 00:46:07.713280 systemd[1]: Reached target timers.target. May 14 00:46:07.713288 systemd[1]: Listening on iscsid.socket. May 14 00:46:07.713296 systemd[1]: Listening on iscsiuio.socket. May 14 00:46:07.713304 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:46:07.713311 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:46:07.713318 systemd[1]: Listening on systemd-journald.socket. May 14 00:46:07.713325 systemd[1]: Listening on systemd-networkd.socket. May 14 00:46:07.713332 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:46:07.713339 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:46:07.713346 systemd[1]: Reached target sockets.target. May 14 00:46:07.713354 systemd[1]: Starting kmod-static-nodes.service... May 14 00:46:07.713361 systemd[1]: Finished network-cleanup.service. May 14 00:46:07.713368 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:46:07.713375 systemd[1]: Starting systemd-journald.service... May 14 00:46:07.713382 systemd[1]: Starting systemd-modules-load.service... May 14 00:46:07.713389 systemd[1]: Starting systemd-resolved.service... May 14 00:46:07.713396 systemd[1]: Starting systemd-vconsole-setup.service... 
May 14 00:46:07.713403 systemd[1]: Finished kmod-static-nodes.service. May 14 00:46:07.713410 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:46:07.713418 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:46:07.713425 systemd[1]: Finished systemd-vconsole-setup.service. May 14 00:46:07.713433 kernel: audit: type=1130 audit(1747183567.708:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.713440 systemd[1]: Starting dracut-cmdline-ask.service... May 14 00:46:07.713451 systemd-journald[290]: Journal started May 14 00:46:07.713495 systemd-journald[290]: Runtime Journal (/run/log/journal/d84947feae264bc6ab0a9693c276cef7) is 6.0M, max 48.7M, 42.6M free. May 14 00:46:07.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.704038 systemd-modules-load[291]: Inserted module 'overlay' May 14 00:46:07.715738 systemd[1]: Started systemd-journald.service. May 14 00:46:07.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.716654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:46:07.719530 kernel: audit: type=1130 audit(1747183567.716:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.722685 kernel: audit: type=1130 audit(1747183567.719:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.722464 systemd-resolved[292]: Positive Trust Anchors: May 14 00:46:07.722471 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:46:07.722499 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:46:07.726801 systemd-resolved[292]: Defaulting to hostname 'linux'. May 14 00:46:07.730549 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 00:46:07.728353 systemd[1]: Started systemd-resolved.service. May 14 00:46:07.733293 kernel: audit: type=1130 audit(1747183567.729:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:46:07.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.730081 systemd[1]: Reached target nss-lookup.target. May 14 00:46:07.734422 kernel: Bridge firewalling registered May 14 00:46:07.734236 systemd-modules-load[291]: Inserted module 'br_netfilter' May 14 00:46:07.739130 systemd[1]: Finished dracut-cmdline-ask.service. May 14 00:46:07.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.740629 systemd[1]: Starting dracut-cmdline.service... May 14 00:46:07.743342 kernel: audit: type=1130 audit(1747183567.739:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.747263 kernel: SCSI subsystem initialized May 14 00:46:07.749829 dracut-cmdline[310]: dracut-dracut-053 May 14 00:46:07.752071 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:46:07.756301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 00:46:07.756319 kernel: device-mapper: uevent: version 1.0.3 May 14 00:46:07.757424 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 14 00:46:07.759700 systemd-modules-load[291]: Inserted module 'dm_multipath' May 14 00:46:07.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.760540 systemd[1]: Finished systemd-modules-load.service. May 14 00:46:07.764341 kernel: audit: type=1130 audit(1747183567.760:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.761927 systemd[1]: Starting systemd-sysctl.service... May 14 00:46:07.769849 systemd[1]: Finished systemd-sysctl.service. May 14 00:46:07.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.773291 kernel: audit: type=1130 audit(1747183567.770:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.809276 kernel: Loading iSCSI transport class v2.0-870. May 14 00:46:07.824266 kernel: iscsi: registered transport (tcp) May 14 00:46:07.839268 kernel: iscsi: registered transport (qla4xxx) May 14 00:46:07.839280 kernel: QLogic iSCSI HBA Driver May 14 00:46:07.872661 systemd[1]: Finished dracut-cmdline.service. 
May 14 00:46:07.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.874080 systemd[1]: Starting dracut-pre-udev.service... May 14 00:46:07.876357 kernel: audit: type=1130 audit(1747183567.872:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:07.920270 kernel: raid6: neonx8 gen() 13743 MB/s May 14 00:46:07.937261 kernel: raid6: neonx8 xor() 10767 MB/s May 14 00:46:07.954268 kernel: raid6: neonx4 gen() 13514 MB/s May 14 00:46:07.971327 kernel: raid6: neonx4 xor() 11149 MB/s May 14 00:46:07.988260 kernel: raid6: neonx2 gen() 12741 MB/s May 14 00:46:08.005261 kernel: raid6: neonx2 xor() 10317 MB/s May 14 00:46:08.022261 kernel: raid6: neonx1 gen() 10545 MB/s May 14 00:46:08.039265 kernel: raid6: neonx1 xor() 8753 MB/s May 14 00:46:08.056265 kernel: raid6: int64x8 gen() 6259 MB/s May 14 00:46:08.073261 kernel: raid6: int64x8 xor() 3541 MB/s May 14 00:46:08.090272 kernel: raid6: int64x4 gen() 7198 MB/s May 14 00:46:08.107270 kernel: raid6: int64x4 xor() 3847 MB/s May 14 00:46:08.124263 kernel: raid6: int64x2 gen() 6145 MB/s May 14 00:46:08.141261 kernel: raid6: int64x2 xor() 3318 MB/s May 14 00:46:08.158262 kernel: raid6: int64x1 gen() 5043 MB/s May 14 00:46:08.175677 kernel: raid6: int64x1 xor() 2642 MB/s May 14 00:46:08.175692 kernel: raid6: using algorithm neonx8 gen() 13743 MB/s May 14 00:46:08.175701 kernel: raid6: .... xor() 10767 MB/s, rmw enabled May 14 00:46:08.175709 kernel: raid6: using neon recovery algorithm May 14 00:46:08.186639 kernel: xor: measuring software checksum speed May 14 00:46:08.186651 kernel: 8regs : 16845 MB/sec May 14 00:46:08.187261 kernel: 32regs : 20728 MB/sec May 14 00:46:08.188265 kernel: arm64_neon : 26063 MB/sec May 14 00:46:08.188276 kernel: xor: using function: arm64_neon (26063 MB/sec) May 14 00:46:08.242266 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 14 00:46:08.252291 systemd[1]: Finished dracut-pre-udev.service. May 14 00:46:08.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:08.255000 audit: BPF prog-id=7 op=LOAD May 14 00:46:08.255000 audit: BPF prog-id=8 op=LOAD May 14 00:46:08.255257 kernel: audit: type=1130 audit(1747183568.252:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:08.255647 systemd[1]: Starting systemd-udevd.service... May 14 00:46:08.269467 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 14 00:46:08.272736 systemd[1]: Started systemd-udevd.service. May 14 00:46:08.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:08.274078 systemd[1]: Starting dracut-pre-trigger.service... May 14 00:46:08.285477 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation May 14 00:46:08.311997 systemd[1]: Finished dracut-pre-trigger.service. 
May 14 00:46:08.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:08.313462 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:46:08.346828 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:46:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:08.376221 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 00:46:08.381900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:46:08.381915 kernel: GPT:9289727 != 19775487 May 14 00:46:08.381929 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:46:08.381939 kernel: GPT:9289727 != 19775487 May 14 00:46:08.381947 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:46:08.381960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:46:08.400270 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544) May 14 00:46:08.401734 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 14 00:46:08.402564 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 14 00:46:08.410237 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 14 00:46:08.414835 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:46:08.416356 systemd[1]: Starting disk-uuid.service... May 14 00:46:08.420032 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 14 00:46:08.422259 disk-uuid[562]: Primary Header is updated. May 14 00:46:08.422259 disk-uuid[562]: Secondary Entries is updated. May 14 00:46:08.422259 disk-uuid[562]: Secondary Header is updated. May 14 00:46:08.425256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:46:09.437999 disk-uuid[563]: The operation has completed successfully. May 14 00:46:09.439052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:46:09.460812 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:46:09.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.460914 systemd[1]: Finished disk-uuid.service. May 14 00:46:09.462537 systemd[1]: Starting verity-setup.service... May 14 00:46:09.478262 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:46:09.498241 systemd[1]: Found device dev-mapper-usr.device. May 14 00:46:09.499725 systemd[1]: Mounting sysusr-usr.mount... May 14 00:46:09.500524 systemd[1]: Finished verity-setup.service. May 14 00:46:09.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.548276 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 14 00:46:09.548643 systemd[1]: Mounted sysusr-usr.mount. 
May 14 00:46:09.549498 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 14 00:46:09.550127 systemd[1]: Starting ignition-setup.service... May 14 00:46:09.551988 systemd[1]: Starting parse-ip-for-networkd.service... May 14 00:46:09.559509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:46:09.559546 kernel: BTRFS info (device vda6): using free space tree May 14 00:46:09.559556 kernel: BTRFS info (device vda6): has skinny extents May 14 00:46:09.567885 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 00:46:09.573577 systemd[1]: Finished ignition-setup.service. May 14 00:46:09.575004 systemd[1]: Starting ignition-fetch-offline.service... May 14 00:46:09.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.632491 systemd[1]: Finished parse-ip-for-networkd.service. May 14 00:46:09.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.635000 audit: BPF prog-id=9 op=LOAD May 14 00:46:09.635940 systemd[1]: Starting systemd-networkd.service... May 14 00:46:09.645183 ignition[644]: Ignition 2.14.0 May 14 00:46:09.645196 ignition[644]: Stage: fetch-offline May 14 00:46:09.645232 ignition[644]: no configs at "/usr/lib/ignition/base.d" May 14 00:46:09.645241 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:46:09.645388 ignition[644]: parsed url from cmdline: "" May 14 00:46:09.645392 ignition[644]: no config URL provided May 14 00:46:09.645396 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:46:09.645404 ignition[644]: no config at "/usr/lib/ignition/user.ign" May 14 00:46:09.645421 ignition[644]: op(1): [started] loading QEMU firmware config module May 14 00:46:09.645425 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 00:46:09.658296 ignition[644]: op(1): [finished] loading QEMU firmware config module May 14 00:46:09.664091 systemd-networkd[738]: lo: Link UP May 14 00:46:09.664105 systemd-networkd[738]: lo: Gained carrier May 14 00:46:09.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.664499 systemd-networkd[738]: Enumeration completed May 14 00:46:09.664582 systemd[1]: Started systemd-networkd.service. May 14 00:46:09.664665 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:46:09.665579 systemd-networkd[738]: eth0: Link UP May 14 00:46:09.665582 systemd-networkd[738]: eth0: Gained carrier May 14 00:46:09.666465 systemd[1]: Reached target network.target. May 14 00:46:09.668555 systemd[1]: Starting iscsiuio.service... May 14 00:46:09.679653 systemd[1]: Started iscsiuio.service. May 14 00:46:09.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.681237 systemd[1]: Starting iscsid.service... 
May 14 00:46:09.684467 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 14 00:46:09.684467 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 14 00:46:09.684467 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 14 00:46:09.684467 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. May 14 00:46:09.684467 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 14 00:46:09.684467 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 14 00:46:09.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.687252 systemd[1]: Started iscsid.service. May 14 00:46:09.691393 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:46:09.693790 systemd[1]: Starting dracut-initqueue.service... May 14 00:46:09.703735 systemd[1]: Finished dracut-initqueue.service. May 14 00:46:09.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.704741 systemd[1]: Reached target remote-fs-pre.target. May 14 00:46:09.706319 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:46:09.707961 systemd[1]: Reached target remote-fs.target. May 14 00:46:09.710226 systemd[1]: Starting dracut-pre-mount.service... May 14 00:46:09.717510 systemd[1]: Finished dracut-pre-mount.service. May 14 00:46:09.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.725273 ignition[644]: parsing config with SHA512: 81d073c6bcbd1cc1effc6d91ba738302d0fdfd0debf86cc87ef2d6e9d743db3daedb9fc2413cf55211c359543d73dff4305a4390047b15c562bbe5911438282b May 14 00:46:09.732535 unknown[644]: fetched base config from "system" May 14 00:46:09.733069 ignition[644]: fetch-offline: fetch-offline passed May 14 00:46:09.732543 unknown[644]: fetched user config from "qemu" May 14 00:46:09.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.733122 ignition[644]: Ignition finished successfully May 14 00:46:09.734777 systemd[1]: Finished ignition-fetch-offline.service. May 14 00:46:09.736330 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:46:09.736981 systemd[1]: Starting ignition-kargs.service...
May 14 00:46:09.745545 ignition[759]: Ignition 2.14.0 May 14 00:46:09.745555 ignition[759]: Stage: kargs May 14 00:46:09.745655 ignition[759]: no configs at "/usr/lib/ignition/base.d" May 14 00:46:09.747806 systemd[1]: Finished ignition-kargs.service. May 14 00:46:09.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.745666 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:46:09.746532 ignition[759]: kargs: kargs passed May 14 00:46:09.750015 systemd[1]: Starting ignition-disks.service... May 14 00:46:09.746584 ignition[759]: Ignition finished successfully May 14 00:46:09.756855 ignition[765]: Ignition 2.14.0 May 14 00:46:09.756866 ignition[765]: Stage: disks May 14 00:46:09.756963 ignition[765]: no configs at "/usr/lib/ignition/base.d" May 14 00:46:09.758878 systemd[1]: Finished ignition-disks.service. May 14 00:46:09.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.756974 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:46:09.760403 systemd[1]: Reached target initrd-root-device.target. May 14 00:46:09.757894 ignition[765]: disks: disks passed May 14 00:46:09.761745 systemd[1]: Reached target local-fs-pre.target. May 14 00:46:09.757942 ignition[765]: Ignition finished successfully May 14 00:46:09.763376 systemd[1]: Reached target local-fs.target. May 14 00:46:09.764730 systemd[1]: Reached target sysinit.target. May 14 00:46:09.765918 systemd[1]: Reached target basic.target. May 14 00:46:09.768097 systemd[1]: Starting systemd-fsck-root.service... May 14 00:46:09.787759 systemd-fsck[773]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 14 00:46:09.793828 systemd[1]: Finished systemd-fsck-root.service. May 14 00:46:09.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.795515 systemd[1]: Mounting sysroot.mount... May 14 00:46:09.803132 systemd[1]: Mounted sysroot.mount. May 14 00:46:09.804406 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 14 00:46:09.803938 systemd[1]: Reached target initrd-root-fs.target. May 14 00:46:09.807968 systemd[1]: Mounting sysroot-usr.mount... May 14 00:46:09.808854 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 14 00:46:09.808896 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:46:09.808921 systemd[1]: Reached target ignition-diskful.target. May 14 00:46:09.810955 systemd[1]: Mounted sysroot-usr.mount. May 14 00:46:09.812311 systemd[1]: Starting initrd-setup-root.service... 
May 14 00:46:09.816736 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:46:09.821404 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory May 14 00:46:09.825458 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:46:09.829060 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:46:09.853842 systemd[1]: Finished initrd-setup-root.service. May 14 00:46:09.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.855205 systemd[1]: Starting ignition-mount.service... May 14 00:46:09.856399 systemd[1]: Starting sysroot-boot.service... May 14 00:46:09.860681 bash[824]: umount: /sysroot/usr/share/oem: not mounted. May 14 00:46:09.869470 ignition[826]: INFO : Ignition 2.14.0 May 14 00:46:09.869470 ignition[826]: INFO : Stage: mount May 14 00:46:09.871517 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:46:09.871517 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:46:09.871517 ignition[826]: INFO : mount: mount passed May 14 00:46:09.871517 ignition[826]: INFO : Ignition finished successfully May 14 00:46:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:09.872466 systemd[1]: Finished ignition-mount.service. May 14 00:46:09.876379 systemd[1]: Finished sysroot-boot.service. May 14 00:46:09.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:10.507473 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 14 00:46:10.513940 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835) May 14 00:46:10.513968 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:46:10.514669 kernel: BTRFS info (device vda6): using free space tree May 14 00:46:10.514680 kernel: BTRFS info (device vda6): has skinny extents May 14 00:46:10.517859 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 14 00:46:10.519374 systemd[1]: Starting ignition-files.service... 
May 14 00:46:10.532556 ignition[855]: INFO : Ignition 2.14.0 May 14 00:46:10.532556 ignition[855]: INFO : Stage: files May 14 00:46:10.533850 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:46:10.533850 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:46:10.533850 ignition[855]: DEBUG : files: compiled without relabeling support, skipping May 14 00:46:10.536282 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:46:10.536282 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:46:10.541515 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:46:10.542527 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:46:10.542527 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:46:10.542183 unknown[855]: wrote ssh authorized keys file for user: core May 14 00:46:10.545354 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:46:10.545354 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 00:46:10.663926 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:46:10.761472 systemd-networkd[738]: eth0: Gained IPv6LL May 14 00:46:11.043767 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:46:11.045827 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:46:11.045827 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 00:46:11.506564 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 00:46:11.598639 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:46:11.600632 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 14 00:46:11.826482 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 00:46:12.191480 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 00:46:12.191480 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:46:12.195319 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:46:12.223087 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:46:12.226300 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:46:12.226300 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:46:12.226300 
ignition[855]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:46:12.226300 ignition[855]: INFO : files: files passed May 14 00:46:12.226300 ignition[855]: INFO : Ignition finished successfully May 14 00:46:12.234980 kernel: kauditd_printk_skb: 23 callbacks suppressed May 14 00:46:12.235007 kernel: audit: type=1130 audit(1747183572.226:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.225784 systemd[1]: Finished ignition-files.service. May 14 00:46:12.227474 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 14 00:46:12.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.230782 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:46:12.244346 kernel: audit: type=1130 audit(1747183572.236:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.244364 kernel: audit: type=1131 audit(1747183572.236:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.244374 kernel: audit: type=1130 audit(1747183572.241:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.244471 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:46:12.231488 systemd[1]: Starting ignition-quench.service... May 14 00:46:12.247213 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:46:12.235416 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:46:12.235490 systemd[1]: Finished ignition-quench.service. May 14 00:46:12.236883 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 14 00:46:12.241445 systemd[1]: Reached target ignition-complete.target. May 14 00:46:12.245723 systemd[1]: Starting initrd-parse-etc.service... May 14 00:46:12.257668 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:46:12.257750 systemd[1]: Finished initrd-parse-etc.service. 
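The files stage above shows Ignition 2.14.0 creating the "core" user, installing its SSH keys, fetching the helm, cilium and kubernetes-sysext artifacts, writing prepare-helm.service, and setting unit presets (prepare-helm enabled, coreos-metadata disabled). The config that drove this is not part of the log; the following is a minimal, hypothetical sketch of the kind of Ignition JSON that would produce those operations, assembled in Python purely for illustration. The spec version, the SSH key, and the unit body are placeholders/assumptions, and paths are given without the /sysroot prefix Ignition adds at run time.

import json

# Hypothetical sketch of an Ignition config matching the files-stage
# operations logged above. Field names follow the Ignition v3 spec;
# the spec version and unit contents are assumptions, not from the log.
config = {
    "ignition": {"version": "3.3.0"},                      # assumed spec version
    "passwd": {
        "users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],   # placeholder key
        }]
    },
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
        }],
        "links": [{
            # Produces the /etc/extensions/kubernetes.raw symlink seen in op(a).
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
        }],
    },
    "systemd": {
        "units": [
            # "enabled" is what drives the "setting preset to enabled/disabled" steps.
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=placeholder\n"},       # placeholder body
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))

In practice such JSON is usually generated from a Butane YAML source rather than written by hand; the sketch only illustrates which config stanzas map to which operations recorded in the log.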
May 14 00:46:12.263164 kernel: audit: type=1130 audit(1747183572.258:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.263182 kernel: audit: type=1131 audit(1747183572.258:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.259174 systemd[1]: Reached target initrd-fs.target. May 14 00:46:12.264356 systemd[1]: Reached target initrd.target. May 14 00:46:12.265515 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:46:12.266179 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:46:12.276300 systemd[1]: Finished dracut-pre-pivot.service. May 14 00:46:12.279310 kernel: audit: type=1130 audit(1747183572.276:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.277742 systemd[1]: Starting initrd-cleanup.service... May 14 00:46:12.285142 systemd[1]: Stopped target nss-lookup.target. May 14 00:46:12.286007 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:46:12.287282 systemd[1]: Stopped target timers.target. May 14 00:46:12.288463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:46:12.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.288561 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:46:12.292909 kernel: audit: type=1131 audit(1747183572.289:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.289704 systemd[1]: Stopped target initrd.target. May 14 00:46:12.292544 systemd[1]: Stopped target basic.target. May 14 00:46:12.293640 systemd[1]: Stopped target ignition-complete.target. May 14 00:46:12.294830 systemd[1]: Stopped target ignition-diskful.target. May 14 00:46:12.296011 systemd[1]: Stopped target initrd-root-device.target. May 14 00:46:12.297295 systemd[1]: Stopped target remote-fs.target. May 14 00:46:12.298492 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:46:12.299765 systemd[1]: Stopped target sysinit.target. May 14 00:46:12.300992 systemd[1]: Stopped target local-fs.target. May 14 00:46:12.302107 systemd[1]: Stopped target local-fs-pre.target. May 14 00:46:12.303278 systemd[1]: Stopped target swap.target. 
May 14 00:46:12.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.304319 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:46:12.308883 kernel: audit: type=1131 audit(1747183572.305:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.304423 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:46:12.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.305612 systemd[1]: Stopped target cryptsetup.target. May 14 00:46:12.313008 kernel: audit: type=1131 audit(1747183572.309:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.308341 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:46:12.308442 systemd[1]: Stopped dracut-initqueue.service. May 14 00:46:12.309735 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:46:12.309832 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:46:12.312709 systemd[1]: Stopped target paths.target. May 14 00:46:12.313722 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:46:12.316285 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:46:12.317571 systemd[1]: Stopped target slices.target. May 14 00:46:12.318867 systemd[1]: Stopped target sockets.target. May 14 00:46:12.320121 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:46:12.320198 systemd[1]: Closed iscsid.socket. May 14 00:46:12.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.321136 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:46:12.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.321207 systemd[1]: Closed iscsiuio.socket. May 14 00:46:12.322166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:46:12.322295 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:46:12.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.323552 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:46:12.323642 systemd[1]: Stopped ignition-files.service. May 14 00:46:12.325365 systemd[1]: Stopping ignition-mount.service... May 14 00:46:12.326149 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 14 00:46:12.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.332638 ignition[895]: INFO : Ignition 2.14.0 May 14 00:46:12.332638 ignition[895]: INFO : Stage: umount May 14 00:46:12.332638 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:46:12.332638 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:46:12.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.326309 systemd[1]: Stopped kmod-static-nodes.service. May 14 00:46:12.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.337773 ignition[895]: INFO : umount: umount passed May 14 00:46:12.337773 ignition[895]: INFO : Ignition finished successfully May 14 00:46:12.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.328423 systemd[1]: Stopping sysroot-boot.service... May 14 00:46:12.329601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:46:12.329730 systemd[1]: Stopped systemd-udev-trigger.service. May 14 00:46:12.330926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:46:12.331026 systemd[1]: Stopped dracut-pre-trigger.service. May 14 00:46:12.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.333575 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:46:12.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.333655 systemd[1]: Stopped ignition-mount.service. May 14 00:46:12.334820 systemd[1]: Stopped target network.target. May 14 00:46:12.335842 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:46:12.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.335889 systemd[1]: Stopped ignition-disks.service. 
May 14 00:46:12.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.337301 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:46:12.337338 systemd[1]: Stopped ignition-kargs.service. May 14 00:46:12.338446 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:46:12.338482 systemd[1]: Stopped ignition-setup.service. May 14 00:46:12.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.339875 systemd[1]: Stopping systemd-networkd.service... May 14 00:46:12.341310 systemd[1]: Stopping systemd-resolved.service... May 14 00:46:12.365000 audit: BPF prog-id=6 op=UNLOAD May 14 00:46:12.343387 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:46:12.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.343855 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:46:12.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.343931 systemd[1]: Finished initrd-cleanup.service. May 14 00:46:12.344767 systemd-networkd[738]: eth0: DHCPv6 lease lost May 14 00:46:12.370000 audit: BPF prog-id=9 op=UNLOAD May 14 00:46:12.346446 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:46:12.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.346532 systemd[1]: Stopped systemd-networkd.service. May 14 00:46:12.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.348171 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:46:12.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.348213 systemd[1]: Closed systemd-networkd.socket. May 14 00:46:12.350150 systemd[1]: Stopping network-cleanup.service... May 14 00:46:12.350915 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:46:12.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.350970 systemd[1]: Stopped parse-ip-for-networkd.service. May 14 00:46:12.352410 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 14 00:46:12.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.352452 systemd[1]: Stopped systemd-sysctl.service. May 14 00:46:12.354523 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:46:12.354563 systemd[1]: Stopped systemd-modules-load.service. May 14 00:46:12.356437 systemd[1]: Stopping systemd-udevd.service... May 14 00:46:12.361570 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:46:12.362001 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:46:12.362091 systemd[1]: Stopped systemd-resolved.service. May 14 00:46:12.365199 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:46:12.365408 systemd[1]: Stopped systemd-udevd.service. May 14 00:46:12.366606 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:46:12.366687 systemd[1]: Stopped network-cleanup.service. May 14 00:46:12.368584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:46:12.368616 systemd[1]: Closed systemd-udevd-control.socket. May 14 00:46:12.370050 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:46:12.370080 systemd[1]: Closed systemd-udevd-kernel.socket. May 14 00:46:12.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.372040 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:46:12.372083 systemd[1]: Stopped dracut-pre-udev.service. May 14 00:46:12.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.373353 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:46:12.373389 systemd[1]: Stopped dracut-cmdline.service. May 14 00:46:12.374654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:46:12.374691 systemd[1]: Stopped dracut-cmdline-ask.service. May 14 00:46:12.377347 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 14 00:46:12.379743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:46:12.379801 systemd[1]: Stopped systemd-vconsole-setup.service. May 14 00:46:12.382579 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:46:12.382657 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 14 00:46:12.393766 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:46:12.393853 systemd[1]: Stopped sysroot-boot.service. May 14 00:46:12.394876 systemd[1]: Reached target initrd-switch-root.target. May 14 00:46:12.396039 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:46:12.396085 systemd[1]: Stopped initrd-setup-root.service. May 14 00:46:12.398164 systemd[1]: Starting initrd-switch-root.service... May 14 00:46:12.404347 systemd[1]: Switching root. May 14 00:46:12.425902 iscsid[745]: iscsid shutting down. 
May 14 00:46:12.426432 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 14 00:46:12.426474 systemd-journald[290]: Journal stopped May 14 00:46:14.438574 kernel: SELinux: Class mctp_socket not defined in policy. May 14 00:46:14.438629 kernel: SELinux: Class anon_inode not defined in policy. May 14 00:46:14.438641 kernel: SELinux: the above unknown classes and permissions will be allowed May 14 00:46:14.438653 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:46:14.438663 kernel: SELinux: policy capability open_perms=1 May 14 00:46:14.438681 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:46:14.438691 kernel: SELinux: policy capability always_check_network=0 May 14 00:46:14.438700 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:46:14.438710 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:46:14.438720 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:46:14.438735 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:46:14.438747 systemd[1]: Successfully loaded SELinux policy in 33.944ms. May 14 00:46:14.438773 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.976ms. May 14 00:46:14.438785 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:46:14.438796 systemd[1]: Detected virtualization kvm. May 14 00:46:14.438807 systemd[1]: Detected architecture arm64. May 14 00:46:14.438817 systemd[1]: Detected first boot. May 14 00:46:14.438827 systemd[1]: Initializing machine ID from VM UUID. May 14 00:46:14.438837 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 14 00:46:14.438847 systemd[1]: Populated /etc with preset unit settings. May 14 00:46:14.438859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:46:14.438874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:46:14.438886 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:46:14.438899 systemd[1]: iscsiuio.service: Deactivated successfully. May 14 00:46:14.438909 systemd[1]: Stopped iscsiuio.service. May 14 00:46:14.438919 systemd[1]: iscsid.service: Deactivated successfully. May 14 00:46:14.438931 systemd[1]: Stopped iscsid.service. May 14 00:46:14.438943 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:46:14.438953 systemd[1]: Stopped initrd-switch-root.service. May 14 00:46:14.438963 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:46:14.438974 systemd[1]: Created slice system-addon\x2dconfig.slice. May 14 00:46:14.438985 systemd[1]: Created slice system-addon\x2drun.slice. May 14 00:46:14.438996 systemd[1]: Created slice system-getty.slice. May 14 00:46:14.439006 systemd[1]: Created slice system-modprobe.slice. 
May 14 00:46:14.439017 systemd[1]: Created slice system-serial\x2dgetty.slice. May 14 00:46:14.439027 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 14 00:46:14.439038 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 14 00:46:14.439048 systemd[1]: Created slice user.slice. May 14 00:46:14.439058 systemd[1]: Started systemd-ask-password-console.path. May 14 00:46:14.439068 systemd[1]: Started systemd-ask-password-wall.path. May 14 00:46:14.439079 systemd[1]: Set up automount boot.automount. May 14 00:46:14.439091 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 14 00:46:14.439101 systemd[1]: Stopped target initrd-switch-root.target. May 14 00:46:14.439112 systemd[1]: Stopped target initrd-fs.target. May 14 00:46:14.439123 systemd[1]: Stopped target initrd-root-fs.target. May 14 00:46:14.439138 systemd[1]: Reached target integritysetup.target. May 14 00:46:14.439148 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:46:14.439159 systemd[1]: Reached target remote-fs.target. May 14 00:46:14.439169 systemd[1]: Reached target slices.target. May 14 00:46:14.439181 systemd[1]: Reached target swap.target. May 14 00:46:14.439192 systemd[1]: Reached target torcx.target. May 14 00:46:14.439202 systemd[1]: Reached target veritysetup.target. May 14 00:46:14.439217 systemd[1]: Listening on systemd-coredump.socket. May 14 00:46:14.439231 systemd[1]: Listening on systemd-initctl.socket. May 14 00:46:14.439241 systemd[1]: Listening on systemd-networkd.socket. May 14 00:46:14.439260 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:46:14.439271 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:46:14.439282 systemd[1]: Listening on systemd-userdbd.socket. May 14 00:46:14.439294 systemd[1]: Mounting dev-hugepages.mount... May 14 00:46:14.439305 systemd[1]: Mounting dev-mqueue.mount... May 14 00:46:14.439315 systemd[1]: Mounting media.mount... May 14 00:46:14.439326 systemd[1]: Mounting sys-kernel-debug.mount... May 14 00:46:14.439336 systemd[1]: Mounting sys-kernel-tracing.mount... May 14 00:46:14.439346 systemd[1]: Mounting tmp.mount... May 14 00:46:14.439357 systemd[1]: Starting flatcar-tmpfiles.service... May 14 00:46:14.439369 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:46:14.439379 systemd[1]: Starting kmod-static-nodes.service... May 14 00:46:14.439391 systemd[1]: Starting modprobe@configfs.service... May 14 00:46:14.439401 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:46:14.439412 systemd[1]: Starting modprobe@drm.service... May 14 00:46:14.439422 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:46:14.439433 systemd[1]: Starting modprobe@fuse.service... May 14 00:46:14.439443 systemd[1]: Starting modprobe@loop.service... May 14 00:46:14.439454 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:46:14.439464 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:46:14.439475 systemd[1]: Stopped systemd-fsck-root.service. May 14 00:46:14.439486 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:46:14.439496 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:46:14.439506 systemd[1]: Stopped systemd-journald.service. May 14 00:46:14.439517 kernel: fuse: init (API version 7.34) May 14 00:46:14.439527 systemd[1]: Starting systemd-journald.service... 
May 14 00:46:14.439538 systemd[1]: Starting systemd-modules-load.service... May 14 00:46:14.439547 kernel: loop: module loaded May 14 00:46:14.439558 systemd[1]: Starting systemd-network-generator.service... May 14 00:46:14.439568 systemd[1]: Starting systemd-remount-fs.service... May 14 00:46:14.439580 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:46:14.439592 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:46:14.439602 systemd[1]: Stopped verity-setup.service. May 14 00:46:14.439612 systemd[1]: Mounted dev-hugepages.mount. May 14 00:46:14.439623 systemd[1]: Mounted dev-mqueue.mount. May 14 00:46:14.439633 systemd[1]: Mounted media.mount. May 14 00:46:14.439643 systemd[1]: Mounted sys-kernel-debug.mount. May 14 00:46:14.439654 systemd[1]: Mounted sys-kernel-tracing.mount. May 14 00:46:14.439664 systemd[1]: Mounted tmp.mount. May 14 00:46:14.439675 systemd[1]: Finished kmod-static-nodes.service. May 14 00:46:14.439688 systemd-journald[997]: Journal started May 14 00:46:14.439728 systemd-journald[997]: Runtime Journal (/run/log/journal/d84947feae264bc6ab0a9693c276cef7) is 6.0M, max 48.7M, 42.6M free. May 14 00:46:12.498000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:46:12.604000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:46:12.604000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:46:12.605000 audit: BPF prog-id=10 op=LOAD May 14 00:46:12.605000 audit: BPF prog-id=10 op=UNLOAD May 14 00:46:12.605000 audit: BPF prog-id=11 op=LOAD May 14 00:46:12.605000 audit: BPF prog-id=11 op=UNLOAD May 14 00:46:12.644000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 14 00:46:12.644000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001858d2 a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:46:12.644000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:46:12.645000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 14 00:46:12.645000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000980d5 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:46:12.645000 audit: CWD cwd="/" May 14 00:46:12.645000 audit: 
PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:46:12.645000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:46:12.645000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:46:14.315000 audit: BPF prog-id=12 op=LOAD May 14 00:46:14.315000 audit: BPF prog-id=3 op=UNLOAD May 14 00:46:14.315000 audit: BPF prog-id=13 op=LOAD May 14 00:46:14.315000 audit: BPF prog-id=14 op=LOAD May 14 00:46:14.315000 audit: BPF prog-id=4 op=UNLOAD May 14 00:46:14.315000 audit: BPF prog-id=5 op=UNLOAD May 14 00:46:14.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.326000 audit: BPF prog-id=12 op=UNLOAD May 14 00:46:14.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:46:14.415000 audit: BPF prog-id=15 op=LOAD May 14 00:46:14.415000 audit: BPF prog-id=16 op=LOAD May 14 00:46:14.415000 audit: BPF prog-id=17 op=LOAD May 14 00:46:14.415000 audit: BPF prog-id=13 op=UNLOAD May 14 00:46:14.415000 audit: BPF prog-id=14 op=UNLOAD May 14 00:46:14.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.437000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:46:14.437000 audit[997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe12c21b0 a2=4000 a3=1 items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:46:14.437000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 14 00:46:14.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:12.642925 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:46:14.440734 systemd[1]: Started systemd-journald.service. May 14 00:46:14.314058 systemd[1]: Queued start job for default target multi-user.target. May 14 00:46:12.643227 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:46:14.314071 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 14 00:46:12.643245 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:46:14.317132 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 00:46:12.643286 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 14 00:46:12.643296 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="skipped missing lower profile" missing profile=oem May 14 00:46:12.643323 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 14 00:46:14.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:46:12.643334 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 14 00:46:12.643521 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 14 00:46:12.643560 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:46:12.643571 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:46:14.441653 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:46:12.644010 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 14 00:46:12.644046 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 14 00:46:12.644063 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 14 00:46:14.441827 systemd[1]: Finished modprobe@configfs.service. May 14 00:46:12.644078 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 14 00:46:12.644094 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 14 00:46:12.644108 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 14 00:46:14.070151 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:14Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:46:14.070441 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:14Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:46:14.070543 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:14Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:46:14.070700 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:14Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:46:14.070751 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:14Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 14 00:46:14.070809 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-14T00:46:14Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 14 00:46:14.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.442939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:46:14.443089 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:46:14.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.443974 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:46:14.444133 systemd[1]: Finished modprobe@drm.service. May 14 00:46:14.445005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:46:14.445155 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:46:14.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.446075 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:46:14.446227 systemd[1]: Finished modprobe@fuse.service. 
May 14 00:46:14.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.447058 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:46:14.447228 systemd[1]: Finished modprobe@loop.service. May 14 00:46:14.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.448311 systemd[1]: Finished systemd-modules-load.service. May 14 00:46:14.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.449210 systemd[1]: Finished systemd-network-generator.service. May 14 00:46:14.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.450319 systemd[1]: Finished flatcar-tmpfiles.service. May 14 00:46:14.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.451163 systemd[1]: Finished systemd-remount-fs.service. May 14 00:46:14.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.452427 systemd[1]: Reached target network-pre.target. May 14 00:46:14.454122 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:46:14.456046 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:46:14.456802 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:46:14.459922 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:46:14.461618 systemd[1]: Starting systemd-journal-flush.service... May 14 00:46:14.462321 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:46:14.463459 systemd[1]: Starting systemd-random-seed.service... May 14 00:46:14.464152 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:46:14.465146 systemd[1]: Starting systemd-sysctl.service... May 14 00:46:14.466930 systemd[1]: Starting systemd-sysusers.service... May 14 00:46:14.470481 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:46:14.471530 systemd[1]: Mounted sys-kernel-config.mount. 
May 14 00:46:14.472172 systemd-journald[997]: Time spent on flushing to /var/log/journal/d84947feae264bc6ab0a9693c276cef7 is 11.876ms for 992 entries. May 14 00:46:14.472172 systemd-journald[997]: System Journal (/var/log/journal/d84947feae264bc6ab0a9693c276cef7) is 8.0M, max 195.6M, 187.6M free. May 14 00:46:14.493038 systemd-journald[997]: Received client request to flush runtime journal. May 14 00:46:14.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.477359 systemd[1]: Finished systemd-random-seed.service. May 14 00:46:14.478078 systemd[1]: Reached target first-boot-complete.target. May 14 00:46:14.486222 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:46:14.487857 systemd[1]: Starting systemd-udev-settle.service... May 14 00:46:14.494729 systemd[1]: Finished systemd-journal-flush.service. May 14 00:46:14.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.496509 udevadm[1027]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 00:46:14.497839 systemd[1]: Finished systemd-sysctl.service. May 14 00:46:14.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.499655 systemd[1]: Finished systemd-sysusers.service. May 14 00:46:14.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.833954 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:46:14.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.835000 audit: BPF prog-id=18 op=LOAD May 14 00:46:14.835000 audit: BPF prog-id=19 op=LOAD May 14 00:46:14.835000 audit: BPF prog-id=7 op=UNLOAD May 14 00:46:14.835000 audit: BPF prog-id=8 op=UNLOAD May 14 00:46:14.836000 systemd[1]: Starting systemd-udevd.service... May 14 00:46:14.855029 systemd-udevd[1030]: Using default interface naming scheme 'v252'. May 14 00:46:14.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.869692 systemd[1]: Started systemd-udevd.service. May 14 00:46:14.870000 audit: BPF prog-id=20 op=LOAD May 14 00:46:14.871696 systemd[1]: Starting systemd-networkd.service... 
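The journald figures above (runtime journal 6.0M with a 48.7M cap, system journal 8.0M with a 195.6M cap) are limits journald derives from the size of the backing filesystems before the runtime journal is flushed to /var. Where fixed limits are preferred, a drop-in along the lines of the sketch below is the usual approach; the 200M/50M values are arbitrary placeholders, not taken from this system.

from pathlib import Path

# Sketch of a journald drop-in pinning journal size limits explicitly.
# SystemMaxUse/RuntimeMaxUse are standard journald.conf options; the
# concrete sizes here are placeholders.
journald_conf = """\
[Journal]
Storage=persistent
SystemMaxUse=200M
RuntimeMaxUse=50M
"""

# A real drop-in would live at /etc/systemd/journald.conf.d/size.conf;
# it is written to the working directory here purely for illustration.
Path("size.conf").write_text(journald_conf)
print(journald_conf)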
May 14 00:46:14.881000 audit: BPF prog-id=21 op=LOAD May 14 00:46:14.881000 audit: BPF prog-id=22 op=LOAD May 14 00:46:14.881000 audit: BPF prog-id=23 op=LOAD May 14 00:46:14.882686 systemd[1]: Starting systemd-userdbd.service... May 14 00:46:14.890805 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 14 00:46:14.922526 systemd[1]: Started systemd-userdbd.service. May 14 00:46:14.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.926702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:46:14.962655 systemd[1]: Finished systemd-udev-settle.service. May 14 00:46:14.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.964435 systemd[1]: Starting lvm2-activation-early.service... May 14 00:46:14.977811 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:46:14.979395 systemd-networkd[1037]: lo: Link UP May 14 00:46:14.979406 systemd-networkd[1037]: lo: Gained carrier May 14 00:46:14.979742 systemd-networkd[1037]: Enumeration completed May 14 00:46:14.979849 systemd-networkd[1037]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:46:14.979855 systemd[1]: Started systemd-networkd.service. May 14 00:46:14.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:14.981378 systemd-networkd[1037]: eth0: Link UP May 14 00:46:14.981386 systemd-networkd[1037]: eth0: Gained carrier May 14 00:46:15.001354 systemd-networkd[1037]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:46:15.007071 systemd[1]: Finished lvm2-activation-early.service. May 14 00:46:15.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.007872 systemd[1]: Reached target cryptsetup.target. May 14 00:46:15.009473 systemd[1]: Starting lvm2-activation.service... May 14 00:46:15.012700 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:46:15.040065 systemd[1]: Finished lvm2-activation.service. May 14 00:46:15.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.040798 systemd[1]: Reached target local-fs-pre.target. May 14 00:46:15.041432 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:46:15.041463 systemd[1]: Reached target local-fs.target. May 14 00:46:15.042010 systemd[1]: Reached target machines.target. May 14 00:46:15.043613 systemd[1]: Starting ldconfig.service... May 14 00:46:15.044528 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
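systemd-networkd above brings up eth0 through the catch-all /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.92/16 over DHCP. The unit's exact contents are not shown in the log; the sketch below emits the typical minimal shape of such a catch-all DHCP .network unit, with the Match glob and DHCP setting as assumptions.

from pathlib import Path

# Sketch of a catch-all DHCP .network unit of the kind the log references
# as zz-default.network. The exact settings shipped on this image are not
# in the log; this is only the common minimal form.
network_unit = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

# A local override would normally live under /etc/systemd/network/;
# the file is written to the working directory here just for inspection.
Path("zz-default.network").write_text(network_unit)
print(network_unit)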
May 14 00:46:15.044578 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:46:15.045700 systemd[1]: Starting systemd-boot-update.service... May 14 00:46:15.048135 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:46:15.050150 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:46:15.052525 systemd[1]: Starting systemd-sysext.service... May 14 00:46:15.053501 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1066 (bootctl) May 14 00:46:15.054536 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 14 00:46:15.062684 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:46:15.066476 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 14 00:46:15.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.072342 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:46:15.072520 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:46:15.085274 kernel: loop0: detected capacity change from 0 to 189592 May 14 00:46:15.123028 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:46:15.123833 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:46:15.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.130295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:46:15.150797 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) May 14 00:46:15.150797 systemd-fsck[1079]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:46:15.153392 kernel: loop1: detected capacity change from 0 to 189592 May 14 00:46:15.153854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 14 00:46:15.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.156312 systemd[1]: Mounting boot.mount... May 14 00:46:15.162990 systemd[1]: Mounted boot.mount. May 14 00:46:15.164748 (sd-sysext)[1082]: Using extensions 'kubernetes'. May 14 00:46:15.165088 (sd-sysext)[1082]: Merged extensions into '/usr'. May 14 00:46:15.173604 systemd[1]: Finished systemd-boot-update.service. May 14 00:46:15.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.180724 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:46:15.181832 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:46:15.183553 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:46:15.185069 systemd[1]: Starting modprobe@loop.service... May 14 00:46:15.185939 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
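(sd-sysext) above merges the 'kubernetes' extension into /usr; it is picked up because the Ignition stage earlier linked /etc/extensions/kubernetes.raw to the downloaded sysext image. An extension is only mergeable if it ships an extension-release file whose ID (and, where set, SYSEXT_LEVEL or VERSION_ID) matches the host. The sketch below lays out such a tree; the release-field values and the placeholder payload are assumptions, not taken from the actual kubernetes image.

from pathlib import Path

# Minimal on-disk layout of a systemd-sysext extension named "kubernetes".
# systemd-sysext checks the extension-release file before merging; the
# ID/SYSEXT_LEVEL values below are assumptions for illustration.
root = Path("kubernetes-sysext")
release = root / "usr/lib/extension-release.d/extension-release.kubernetes"
release.parent.mkdir(parents=True, exist_ok=True)
release.write_text("ID=flatcar\nSYSEXT_LEVEL=1.0\n")

# Payload goes under usr/ and is overlaid onto the host's /usr on merge.
bindir = root / "usr/bin"
bindir.mkdir(parents=True, exist_ok=True)
(bindir / "kubectl").write_text("#!/bin/sh\necho placeholder\n")  # placeholder payload

print(f"sysext tree prepared under {root}/")

Packed into a raw or squashfs image and placed under /etc/extensions/ (exactly where the Ignition link above points), the extension is merged into /usr by systemd-sysext at boot, which is what the log records here.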
May 14 00:46:15.186055 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:46:15.186792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:46:15.186914 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:46:15.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.188054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:46:15.188156 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:46:15.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.189343 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:46:15.189447 systemd[1]: Finished modprobe@loop.service. May 14 00:46:15.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.190471 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:46:15.190564 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:46:15.242027 ldconfig[1065]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:46:15.246336 systemd[1]: Finished ldconfig.service. May 14 00:46:15.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.433629 systemd[1]: Mounting usr-share-oem.mount... May 14 00:46:15.438522 systemd[1]: Mounted usr-share-oem.mount. May 14 00:46:15.440122 systemd[1]: Finished systemd-sysext.service. May 14 00:46:15.441954 systemd[1]: Starting ensure-sysext.service... May 14 00:46:15.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.443519 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:46:15.447756 systemd[1]: Reloading. 
May 14 00:46:15.461178 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 14 00:46:15.464105 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:46:15.467114 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:46:15.483320 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-05-14T00:46:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:46:15.483345 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-05-14T00:46:15Z" level=info msg="torcx already run" May 14 00:46:15.549491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:46:15.549512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:46:15.565124 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:46:15.609000 audit: BPF prog-id=24 op=LOAD May 14 00:46:15.609000 audit: BPF prog-id=21 op=UNLOAD May 14 00:46:15.609000 audit: BPF prog-id=25 op=LOAD May 14 00:46:15.610000 audit: BPF prog-id=26 op=LOAD May 14 00:46:15.610000 audit: BPF prog-id=22 op=UNLOAD May 14 00:46:15.610000 audit: BPF prog-id=23 op=UNLOAD May 14 00:46:15.610000 audit: BPF prog-id=27 op=LOAD May 14 00:46:15.611000 audit: BPF prog-id=28 op=LOAD May 14 00:46:15.611000 audit: BPF prog-id=18 op=UNLOAD May 14 00:46:15.611000 audit: BPF prog-id=19 op=UNLOAD May 14 00:46:15.611000 audit: BPF prog-id=29 op=LOAD May 14 00:46:15.611000 audit: BPF prog-id=20 op=UNLOAD May 14 00:46:15.613000 audit: BPF prog-id=30 op=LOAD May 14 00:46:15.613000 audit: BPF prog-id=15 op=UNLOAD May 14 00:46:15.613000 audit: BPF prog-id=31 op=LOAD May 14 00:46:15.613000 audit: BPF prog-id=32 op=LOAD May 14 00:46:15.613000 audit: BPF prog-id=16 op=UNLOAD May 14 00:46:15.613000 audit: BPF prog-id=17 op=UNLOAD May 14 00:46:15.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.615898 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:46:15.621910 systemd[1]: Starting audit-rules.service... May 14 00:46:15.623720 systemd[1]: Starting clean-ca-certificates.service... May 14 00:46:15.625796 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:46:15.627000 audit: BPF prog-id=33 op=LOAD May 14 00:46:15.628741 systemd[1]: Starting systemd-resolved.service... May 14 00:46:15.631000 audit: BPF prog-id=34 op=LOAD May 14 00:46:15.632137 systemd[1]: Starting systemd-timesyncd.service... May 14 00:46:15.635385 systemd[1]: Starting systemd-update-utmp.service... May 14 00:46:15.641860 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:46:15.652799 systemd[1]: Starting modprobe@dm_mod.service... 
May 14 00:46:15.655087 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:46:15.657481 systemd[1]: Starting modprobe@loop.service... May 14 00:46:15.659000 audit[1155]: SYSTEM_BOOT pid=1155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:46:15.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.658335 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:46:15.658514 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:46:15.659629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:46:15.659828 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:46:15.660999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:46:15.661120 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:46:15.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.662323 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:46:15.662442 systemd[1]: Finished modprobe@loop.service. May 14 00:46:15.666480 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:46:15.668026 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:46:15.670629 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:46:15.672422 systemd[1]: Starting modprobe@loop.service... May 14 00:46:15.672998 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:46:15.673161 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:46:15.674396 systemd[1]: Finished clean-ca-certificates.service. 
May 14 00:46:15.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.675566 systemd[1]: Finished systemd-journal-catalog-update.service. May 14 00:46:15.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.676719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:46:15.676837 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:46:15.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.677903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:46:15.678010 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:46:15.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.679472 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:46:15.679583 systemd[1]: Finished modprobe@loop.service. May 14 00:46:15.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.682663 systemd[1]: Finished systemd-update-utmp.service. May 14 00:46:15.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.684920 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:46:15.686333 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:46:15.687927 systemd[1]: Starting modprobe@drm.service... May 14 00:46:15.689550 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:46:15.691116 systemd[1]: Starting modprobe@loop.service... May 14 00:46:15.692016 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:46:15.692076 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 14 00:46:15.693068 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:46:15.694893 systemd[1]: Starting systemd-update-done.service... May 14 00:46:15.695627 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:46:15.696312 systemd[1]: Finished ensure-sysext.service. May 14 00:46:15.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.697190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:46:15.697323 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:46:15.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.698157 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:46:15.698321 systemd[1]: Finished modprobe@drm.service. May 14 00:46:15.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.699135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:46:15.699240 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:46:15.700223 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:46:15.700356 systemd[1]: Finished modprobe@loop.service. May 14 00:46:15.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.702058 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:46:15.702103 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:46:15.707667 systemd[1]: Finished systemd-update-done.service. 
May 14 00:46:15.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:46:15.719920 systemd[1]: Started systemd-timesyncd.service. May 14 00:46:15.719000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:46:15.719000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd6dff1a0 a2=420 a3=0 items=0 ppid=1148 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:46:15.719000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:46:15.720224 augenrules[1181]: No rules May 14 00:46:15.720807 systemd-timesyncd[1153]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:46:15.720863 systemd-timesyncd[1153]: Initial clock synchronization to Wed 2025-05-14 00:46:15.606164 UTC. May 14 00:46:15.720947 systemd[1]: Reached target time-set.target. May 14 00:46:15.721841 systemd[1]: Finished audit-rules.service. May 14 00:46:15.728760 systemd-resolved[1152]: Positive Trust Anchors: May 14 00:46:15.728980 systemd-resolved[1152]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:46:15.729057 systemd-resolved[1152]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:46:15.745462 systemd-resolved[1152]: Defaulting to hostname 'linux'. May 14 00:46:15.748699 systemd[1]: Started systemd-resolved.service. May 14 00:46:15.749377 systemd[1]: Reached target network.target. May 14 00:46:15.749937 systemd[1]: Reached target nss-lookup.target. May 14 00:46:15.750538 systemd[1]: Reached target sysinit.target. May 14 00:46:15.751176 systemd[1]: Started motdgen.path. May 14 00:46:15.751751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:46:15.752682 systemd[1]: Started logrotate.timer. May 14 00:46:15.753323 systemd[1]: Started mdadm.timer. May 14 00:46:15.753821 systemd[1]: Started systemd-tmpfiles-clean.timer. May 14 00:46:15.754447 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:46:15.754477 systemd[1]: Reached target paths.target. May 14 00:46:15.754996 systemd[1]: Reached target timers.target. May 14 00:46:15.755832 systemd[1]: Listening on dbus.socket. May 14 00:46:15.757423 systemd[1]: Starting docker.socket... May 14 00:46:15.760412 systemd[1]: Listening on sshd.socket. May 14 00:46:15.761060 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:46:15.761524 systemd[1]: Listening on docker.socket. May 14 00:46:15.762143 systemd[1]: Reached target sockets.target. 
May 14 00:46:15.762736 systemd[1]: Reached target basic.target. May 14 00:46:15.763312 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:46:15.763341 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:46:15.764226 systemd[1]: Starting containerd.service... May 14 00:46:15.765770 systemd[1]: Starting dbus.service... May 14 00:46:15.767296 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:46:15.769024 systemd[1]: Starting extend-filesystems.service... May 14 00:46:15.769813 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:46:15.770822 systemd[1]: Starting motdgen.service... May 14 00:46:15.772042 jq[1191]: false May 14 00:46:15.772328 systemd[1]: Starting prepare-helm.service... May 14 00:46:15.774288 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:46:15.776709 systemd[1]: Starting sshd-keygen.service... May 14 00:46:15.779340 systemd[1]: Starting systemd-logind.service... May 14 00:46:15.779912 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:46:15.779981 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:46:15.780390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:46:15.780985 systemd[1]: Starting update-engine.service... May 14 00:46:15.785144 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:46:15.787447 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:46:15.787640 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 14 00:46:15.788369 jq[1209]: true May 14 00:46:15.788638 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:46:15.788795 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:46:15.794426 extend-filesystems[1192]: Found loop1 May 14 00:46:15.794426 extend-filesystems[1192]: Found vda May 14 00:46:15.794426 extend-filesystems[1192]: Found vda1 May 14 00:46:15.794426 extend-filesystems[1192]: Found vda2 May 14 00:46:15.794426 extend-filesystems[1192]: Found vda3 May 14 00:46:15.794426 extend-filesystems[1192]: Found usr May 14 00:46:15.801908 extend-filesystems[1192]: Found vda4 May 14 00:46:15.801908 extend-filesystems[1192]: Found vda6 May 14 00:46:15.801908 extend-filesystems[1192]: Found vda7 May 14 00:46:15.801908 extend-filesystems[1192]: Found vda9 May 14 00:46:15.801908 extend-filesystems[1192]: Checking size of /dev/vda9 May 14 00:46:15.810607 jq[1212]: true May 14 00:46:15.810107 systemd[1]: Started dbus.service. May 14 00:46:15.809921 dbus-daemon[1190]: [system] SELinux support is enabled May 14 00:46:15.811285 tar[1211]: linux-arm64/helm May 14 00:46:15.812618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:46:15.812638 systemd[1]: Reached target system-config.target. 
May 14 00:46:15.813332 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:46:15.813358 systemd[1]: Reached target user-config.target. May 14 00:46:15.816780 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:46:15.816933 systemd[1]: Finished motdgen.service. May 14 00:46:15.819294 extend-filesystems[1192]: Resized partition /dev/vda9 May 14 00:46:15.838098 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:46:15.847193 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:46:15.851802 systemd-logind[1202]: New seat seat0. May 14 00:46:15.855791 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:46:15.853814 systemd[1]: Started systemd-logind.service. May 14 00:46:15.884214 update_engine[1204]: I0514 00:46:15.879348 1204 main.cc:92] Flatcar Update Engine starting May 14 00:46:15.891699 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:46:15.890757 systemd[1]: Started update-engine.service. May 14 00:46:15.893217 systemd[1]: Started locksmithd.service. May 14 00:46:15.912758 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:46:15.912758 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:46:15.912758 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:46:15.918127 update_engine[1204]: I0514 00:46:15.894294 1204 update_check_scheduler.cc:74] Next update check in 2m27s May 14 00:46:15.918198 bash[1241]: Updated "/home/core/.ssh/authorized_keys" May 14 00:46:15.915196 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:46:15.918353 env[1213]: time="2025-05-14T00:46:15.913222600Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:46:15.918511 extend-filesystems[1192]: Resized filesystem in /dev/vda9 May 14 00:46:15.915384 systemd[1]: Finished extend-filesystems.service. May 14 00:46:15.917274 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 14 00:46:15.943006 env[1213]: time="2025-05-14T00:46:15.942916920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 00:46:15.943091 env[1213]: time="2025-05-14T00:46:15.943070360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945355280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945384160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945584960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945601720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945615480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945625200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.945694120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.946055560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.946168680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:46:15.948165 env[1213]: time="2025-05-14T00:46:15.946183800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 00:46:15.948946 env[1213]: time="2025-05-14T00:46:15.946245000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:46:15.948946 env[1213]: time="2025-05-14T00:46:15.946275760Z" level=info msg="metadata content store policy set" policy=shared May 14 00:46:15.952004 env[1213]: time="2025-05-14T00:46:15.951965960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:46:15.952004 env[1213]: time="2025-05-14T00:46:15.951998440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:46:15.952105 env[1213]: time="2025-05-14T00:46:15.952011880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:46:15.952105 env[1213]: time="2025-05-14T00:46:15.952042920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952105 env[1213]: time="2025-05-14T00:46:15.952057920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952105 env[1213]: time="2025-05-14T00:46:15.952071680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952105 env[1213]: time="2025-05-14T00:46:15.952085240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952468 env[1213]: time="2025-05-14T00:46:15.952441680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952468 env[1213]: time="2025-05-14T00:46:15.952468280Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 May 14 00:46:15.952647 env[1213]: time="2025-05-14T00:46:15.952482640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952647 env[1213]: time="2025-05-14T00:46:15.952496400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952647 env[1213]: time="2025-05-14T00:46:15.952508880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:46:15.952647 env[1213]: time="2025-05-14T00:46:15.952619920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 00:46:15.952744 env[1213]: time="2025-05-14T00:46:15.952691240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:46:15.952988 env[1213]: time="2025-05-14T00:46:15.952887400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:46:15.952988 env[1213]: time="2025-05-14T00:46:15.952921560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:46:15.952988 env[1213]: time="2025-05-14T00:46:15.952936080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953083960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953100200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953112520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953128800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953140600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953153920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953165320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953176760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953190400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953342960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953360040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953372120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953384000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:46:15.956794 env[1213]: time="2025-05-14T00:46:15.953400440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:46:15.955153 systemd[1]: Started containerd.service. May 14 00:46:15.957137 env[1213]: time="2025-05-14T00:46:15.953411960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:46:15.957137 env[1213]: time="2025-05-14T00:46:15.953429440Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:46:15.957137 env[1213]: time="2025-05-14T00:46:15.953463280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.953651640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.953703800Z" level=info msg="Connect containerd service" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.953733800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.954513840Z" level=error msg="failed to load cni 
during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.954983760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.955021080Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956034320Z" level=info msg="containerd successfully booted in 0.050286s" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956322520Z" level=info msg="Start subscribing containerd event" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956382960Z" level=info msg="Start recovering state" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956448640Z" level=info msg="Start event monitor" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956468280Z" level=info msg="Start snapshots syncer" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956480240Z" level=info msg="Start cni network conf syncer for default" May 14 00:46:15.957197 env[1213]: time="2025-05-14T00:46:15.956488560Z" level=info msg="Start streaming server" May 14 00:46:15.993452 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:46:16.222759 tar[1211]: linux-arm64/LICENSE May 14 00:46:16.222863 tar[1211]: linux-arm64/README.md May 14 00:46:16.227198 systemd[1]: Finished prepare-helm.service. May 14 00:46:16.585417 systemd-networkd[1037]: eth0: Gained IPv6LL May 14 00:46:16.587299 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:46:16.588218 systemd[1]: Reached target network-online.target. May 14 00:46:16.590244 systemd[1]: Starting kubelet.service... May 14 00:46:17.103983 systemd[1]: Started kubelet.service. May 14 00:46:17.538379 kubelet[1258]: E0514 00:46:17.538289 1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:46:17.540594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:46:17.540720 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:46:18.687103 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:46:18.704174 systemd[1]: Finished sshd-keygen.service. May 14 00:46:18.706266 systemd[1]: Starting issuegen.service... May 14 00:46:18.710962 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:46:18.711116 systemd[1]: Finished issuegen.service. May 14 00:46:18.713192 systemd[1]: Starting systemd-user-sessions.service... May 14 00:46:18.718831 systemd[1]: Finished systemd-user-sessions.service. May 14 00:46:18.720799 systemd[1]: Started getty@tty1.service. May 14 00:46:18.722593 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:46:18.723421 systemd[1]: Reached target getty.target. May 14 00:46:18.724049 systemd[1]: Reached target multi-user.target. May 14 00:46:18.725780 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:46:18.731912 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
May 14 00:46:18.732078 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:46:18.732965 systemd[1]: Startup finished in 551ms (kernel) + 4.891s (initrd) + 6.272s (userspace) = 11.715s. May 14 00:46:20.245368 systemd[1]: Created slice system-sshd.slice. May 14 00:46:20.246765 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:43182.service. May 14 00:46:20.288506 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 43182 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:46:20.290544 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:46:20.301277 systemd-logind[1202]: New session 1 of user core. May 14 00:46:20.302204 systemd[1]: Created slice user-500.slice. May 14 00:46:20.303368 systemd[1]: Starting user-runtime-dir@500.service... May 14 00:46:20.312024 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:46:20.313327 systemd[1]: Starting user@500.service... May 14 00:46:20.316197 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:46:20.376089 systemd[1283]: Queued start job for default target default.target. May 14 00:46:20.376600 systemd[1283]: Reached target paths.target. May 14 00:46:20.376632 systemd[1283]: Reached target sockets.target. May 14 00:46:20.376644 systemd[1283]: Reached target timers.target. May 14 00:46:20.376653 systemd[1283]: Reached target basic.target. May 14 00:46:20.376694 systemd[1283]: Reached target default.target. May 14 00:46:20.376718 systemd[1283]: Startup finished in 54ms. May 14 00:46:20.376790 systemd[1]: Started user@500.service. May 14 00:46:20.377835 systemd[1]: Started session-1.scope. May 14 00:46:20.428656 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:43196.service. May 14 00:46:20.479372 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 43196 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:46:20.480676 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:46:20.484058 systemd-logind[1202]: New session 2 of user core. May 14 00:46:20.485333 systemd[1]: Started session-2.scope. May 14 00:46:20.537689 sshd[1292]: pam_unix(sshd:session): session closed for user core May 14 00:46:20.540436 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:43196.service: Deactivated successfully. May 14 00:46:20.541043 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:46:20.541554 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. May 14 00:46:20.542607 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:43200.service. May 14 00:46:20.543227 systemd-logind[1202]: Removed session 2. May 14 00:46:20.578493 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 43200 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:46:20.579514 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:46:20.582530 systemd-logind[1202]: New session 3 of user core. May 14 00:46:20.583293 systemd[1]: Started session-3.scope. May 14 00:46:20.631311 sshd[1298]: pam_unix(sshd:session): session closed for user core May 14 00:46:20.635054 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:43200.service: Deactivated successfully. May 14 00:46:20.635605 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:46:20.636150 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. May 14 00:46:20.637193 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:43206.service. 
May 14 00:46:20.637886 systemd-logind[1202]: Removed session 3. May 14 00:46:20.672623 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 43206 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:46:20.673745 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:46:20.676748 systemd-logind[1202]: New session 4 of user core. May 14 00:46:20.677505 systemd[1]: Started session-4.scope. May 14 00:46:20.730796 sshd[1304]: pam_unix(sshd:session): session closed for user core May 14 00:46:20.734000 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:43212.service. May 14 00:46:20.734527 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:43206.service: Deactivated successfully. May 14 00:46:20.735129 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:46:20.735573 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. May 14 00:46:20.736157 systemd-logind[1202]: Removed session 4. May 14 00:46:20.769416 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 43212 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:46:20.770505 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:46:20.773430 systemd-logind[1202]: New session 5 of user core. May 14 00:46:20.774170 systemd[1]: Started session-5.scope. May 14 00:46:20.833375 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:46:20.834204 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:46:20.892623 systemd[1]: Starting docker.service... May 14 00:46:20.985316 env[1325]: time="2025-05-14T00:46:20.985259621Z" level=info msg="Starting up" May 14 00:46:20.987180 env[1325]: time="2025-05-14T00:46:20.987149144Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:46:20.987180 env[1325]: time="2025-05-14T00:46:20.987174450Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:46:20.987287 env[1325]: time="2025-05-14T00:46:20.987201064Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:46:20.987287 env[1325]: time="2025-05-14T00:46:20.987219112Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:46:20.989620 env[1325]: time="2025-05-14T00:46:20.989516740Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:46:20.989620 env[1325]: time="2025-05-14T00:46:20.989536373Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:46:20.989620 env[1325]: time="2025-05-14T00:46:20.989551644Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:46:20.989620 env[1325]: time="2025-05-14T00:46:20.989561005Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:46:20.994002 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1111180925-merged.mount: Deactivated successfully. May 14 00:46:21.138134 env[1325]: time="2025-05-14T00:46:21.138035863Z" level=info msg="Loading containers: start." May 14 00:46:21.259283 kernel: Initializing XFRM netlink socket May 14 00:46:21.282259 env[1325]: time="2025-05-14T00:46:21.282203897Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 14 00:46:21.344806 systemd-networkd[1037]: docker0: Link UP May 14 00:46:21.365402 env[1325]: time="2025-05-14T00:46:21.365364587Z" level=info msg="Loading containers: done." May 14 00:46:21.383572 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1839764308-merged.mount: Deactivated successfully. May 14 00:46:21.387275 env[1325]: time="2025-05-14T00:46:21.387220453Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:46:21.387409 env[1325]: time="2025-05-14T00:46:21.387389601Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 14 00:46:21.387511 env[1325]: time="2025-05-14T00:46:21.387486087Z" level=info msg="Daemon has completed initialization" May 14 00:46:21.399633 systemd[1]: Started docker.service. May 14 00:46:21.408393 env[1325]: time="2025-05-14T00:46:21.408277669Z" level=info msg="API listen on /run/docker.sock" May 14 00:46:22.182819 env[1213]: time="2025-05-14T00:46:22.182777203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 00:46:22.762733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549348597.mount: Deactivated successfully. May 14 00:46:24.508026 env[1213]: time="2025-05-14T00:46:24.507975429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:24.509479 env[1213]: time="2025-05-14T00:46:24.509442812Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:24.511568 env[1213]: time="2025-05-14T00:46:24.511541912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:24.513684 env[1213]: time="2025-05-14T00:46:24.513653827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:24.514450 env[1213]: time="2025-05-14T00:46:24.514418486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 00:46:24.515638 env[1213]: time="2025-05-14T00:46:24.515614650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 00:46:26.158862 env[1213]: time="2025-05-14T00:46:26.158808440Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:26.160363 env[1213]: time="2025-05-14T00:46:26.160323630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:26.162067 env[1213]: time="2025-05-14T00:46:26.162040338Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:26.163754 env[1213]: time="2025-05-14T00:46:26.163730466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:26.164612 env[1213]: time="2025-05-14T00:46:26.164572680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 00:46:26.165027 env[1213]: time="2025-05-14T00:46:26.165004687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 00:46:27.668357 env[1213]: time="2025-05-14T00:46:27.668309482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:27.671290 env[1213]: time="2025-05-14T00:46:27.671262953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:27.672952 env[1213]: time="2025-05-14T00:46:27.672922225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:27.674472 env[1213]: time="2025-05-14T00:46:27.674446105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:27.675325 env[1213]: time="2025-05-14T00:46:27.675292186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 00:46:27.675811 env[1213]: time="2025-05-14T00:46:27.675790179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 00:46:27.791495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:46:27.791658 systemd[1]: Stopped kubelet.service. May 14 00:46:27.793055 systemd[1]: Starting kubelet.service... May 14 00:46:27.877581 systemd[1]: Started kubelet.service. May 14 00:46:27.914941 kubelet[1460]: E0514 00:46:27.914894 1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:46:27.917482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:46:27.917610 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:46:28.892278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604189558.mount: Deactivated successfully. 
May 14 00:46:29.465368 env[1213]: time="2025-05-14T00:46:29.465315361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:29.467571 env[1213]: time="2025-05-14T00:46:29.467536460Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:29.469786 env[1213]: time="2025-05-14T00:46:29.469736533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:29.471621 env[1213]: time="2025-05-14T00:46:29.471588806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:29.472293 env[1213]: time="2025-05-14T00:46:29.472241714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 00:46:29.472853 env[1213]: time="2025-05-14T00:46:29.472818135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:46:30.008699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076244238.mount: Deactivated successfully. May 14 00:46:30.824717 env[1213]: time="2025-05-14T00:46:30.824649810Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:30.830427 env[1213]: time="2025-05-14T00:46:30.830386632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:30.832182 env[1213]: time="2025-05-14T00:46:30.832153880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:30.834038 env[1213]: time="2025-05-14T00:46:30.834010251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:30.835663 env[1213]: time="2025-05-14T00:46:30.835623561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 00:46:30.836098 env[1213]: time="2025-05-14T00:46:30.836058598Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 00:46:31.292657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880303635.mount: Deactivated successfully. 
May 14 00:46:31.298478 env[1213]: time="2025-05-14T00:46:31.298410237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:31.300319 env[1213]: time="2025-05-14T00:46:31.300282373Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:31.302316 env[1213]: time="2025-05-14T00:46:31.302278468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:31.303818 env[1213]: time="2025-05-14T00:46:31.303779042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:31.304217 env[1213]: time="2025-05-14T00:46:31.304183379Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 00:46:31.304789 env[1213]: time="2025-05-14T00:46:31.304726288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 00:46:31.799037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638826507.mount: Deactivated successfully. May 14 00:46:34.903967 env[1213]: time="2025-05-14T00:46:34.903915405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:34.906123 env[1213]: time="2025-05-14T00:46:34.906085072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:34.907946 env[1213]: time="2025-05-14T00:46:34.907919095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:34.909843 env[1213]: time="2025-05-14T00:46:34.909817194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:34.911657 env[1213]: time="2025-05-14T00:46:34.911625690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 00:46:38.168443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:46:38.168616 systemd[1]: Stopped kubelet.service. May 14 00:46:38.169952 systemd[1]: Starting kubelet.service... May 14 00:46:38.256491 systemd[1]: Started kubelet.service. 
May 14 00:46:38.287221 kubelet[1492]: E0514 00:46:38.287186 1492 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:46:38.289049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:46:38.289168 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:46:39.308804 systemd[1]: Stopped kubelet.service. May 14 00:46:39.310712 systemd[1]: Starting kubelet.service... May 14 00:46:39.333932 systemd[1]: Reloading. May 14 00:46:39.384683 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-05-14T00:46:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:46:39.384711 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-05-14T00:46:39Z" level=info msg="torcx already run" May 14 00:46:39.522901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:46:39.522922 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:46:39.537949 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:46:39.601085 systemd[1]: Started kubelet.service. May 14 00:46:39.604004 systemd[1]: Stopping kubelet.service... May 14 00:46:39.604375 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:46:39.604543 systemd[1]: Stopped kubelet.service. May 14 00:46:39.606025 systemd[1]: Starting kubelet.service... May 14 00:46:39.690687 systemd[1]: Started kubelet.service. May 14 00:46:39.735053 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:46:39.735053 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:46:39.735053 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:46:39.735497 kubelet[1573]: I0514 00:46:39.735429 1573 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:46:40.729131 kubelet[1573]: I0514 00:46:40.729090 1573 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:46:40.729334 kubelet[1573]: I0514 00:46:40.729320 1573 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:46:40.729652 kubelet[1573]: I0514 00:46:40.729633 1573 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:46:40.769615 kubelet[1573]: E0514 00:46:40.769573 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:40.771796 kubelet[1573]: I0514 00:46:40.771749 1573 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:46:40.780504 kubelet[1573]: E0514 00:46:40.780461 1573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 00:46:40.780504 kubelet[1573]: I0514 00:46:40.780496 1573 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:46:40.786691 kubelet[1573]: I0514 00:46:40.786660 1573 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:46:40.787758 kubelet[1573]: I0514 00:46:40.787730 1573 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:46:40.787915 kubelet[1573]: I0514 00:46:40.787878 1573 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:46:40.788089 kubelet[1573]: I0514 00:46:40.787910 1573 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:46:40.788239 kubelet[1573]: I0514 00:46:40.788227 1573 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:46:40.788239 kubelet[1573]: I0514 00:46:40.788240 1573 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:46:40.788439 kubelet[1573]: I0514 00:46:40.788426 1573 state_mem.go:36] "Initialized new in-memory state store" May 14 00:46:40.791958 kubelet[1573]: I0514 00:46:40.791931 1573 kubelet.go:408] "Attempting to sync node with API server" May 14 00:46:40.791958 kubelet[1573]: I0514 00:46:40.791964 1573 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:46:40.792121 kubelet[1573]: I0514 00:46:40.792109 1573 kubelet.go:314] "Adding apiserver pod source" May 14 00:46:40.793558 kubelet[1573]: I0514 00:46:40.793539 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:46:40.793634 kubelet[1573]: W0514 00:46:40.793592 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 14 00:46:40.793678 kubelet[1573]: E0514 00:46:40.793650 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:40.794017 kubelet[1573]: W0514 00:46:40.793983 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 14 00:46:40.794065 kubelet[1573]: E0514 00:46:40.794026 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:40.797863 kubelet[1573]: I0514 00:46:40.797840 1573 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:46:40.799668 kubelet[1573]: I0514 00:46:40.799643 1573 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:46:40.800356 kubelet[1573]: W0514 00:46:40.800338 1573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:46:40.801032 kubelet[1573]: I0514 00:46:40.801012 1573 server.go:1269] "Started kubelet" May 14 00:46:40.802193 kubelet[1573]: I0514 00:46:40.802137 1573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:46:40.802440 kubelet[1573]: I0514 00:46:40.802423 1573 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:46:40.802574 kubelet[1573]: I0514 00:46:40.802557 1573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:46:40.803040 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 14 00:46:40.803149 kubelet[1573]: I0514 00:46:40.803129 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:46:40.803572 kubelet[1573]: I0514 00:46:40.803550 1573 server.go:460] "Adding debug handlers to kubelet server" May 14 00:46:40.804671 kubelet[1573]: I0514 00:46:40.804644 1573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:46:40.805055 kubelet[1573]: E0514 00:46:40.803823 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3e3a5b21729d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:46:40.800985757 +0000 UTC m=+1.104559539,LastTimestamp:2025-05-14 00:46:40.800985757 +0000 UTC m=+1.104559539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:46:40.805344 kubelet[1573]: I0514 00:46:40.805325 1573 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:46:40.805414 kubelet[1573]: I0514 00:46:40.805399 1573 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:46:40.805476 kubelet[1573]: I0514 00:46:40.805463 1573 reconciler.go:26] "Reconciler: start to sync state" May 14 00:46:40.806599 kubelet[1573]: W0514 00:46:40.805690 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 14 00:46:40.806710 kubelet[1573]: E0514 00:46:40.806691 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:40.807129 kubelet[1573]: I0514 00:46:40.807105 1573 factory.go:221] Registration of the systemd container factory successfully May 14 00:46:40.807400 kubelet[1573]: I0514 00:46:40.807378 1573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:46:40.808597 kubelet[1573]: E0514 00:46:40.808555 1573 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:46:40.808670 kubelet[1573]: E0514 00:46:40.808649 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" May 14 00:46:40.809089 kubelet[1573]: E0514 00:46:40.809040 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:46:40.809550 kubelet[1573]: I0514 00:46:40.809523 1573 factory.go:221] Registration of the containerd container factory successfully May 14 00:46:40.819200 kubelet[1573]: I0514 00:46:40.819180 1573 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:46:40.819200 kubelet[1573]: I0514 00:46:40.819197 1573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:46:40.819357 kubelet[1573]: I0514 00:46:40.819213 1573 state_mem.go:36] "Initialized new in-memory state store" May 14 00:46:40.845497 kubelet[1573]: I0514 00:46:40.845454 1573 policy_none.go:49] "None policy: Start" May 14 00:46:40.846162 kubelet[1573]: I0514 00:46:40.846138 1573 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:46:40.846213 kubelet[1573]: I0514 00:46:40.846172 1573 state_mem.go:35] "Initializing new in-memory state store" May 14 00:46:40.851336 kubelet[1573]: I0514 00:46:40.851300 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:46:40.852350 kubelet[1573]: I0514 00:46:40.852332 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:46:40.852350 kubelet[1573]: I0514 00:46:40.852353 1573 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:46:40.852442 kubelet[1573]: I0514 00:46:40.852370 1573 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:46:40.852442 kubelet[1573]: E0514 00:46:40.852413 1573 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:46:40.852994 kubelet[1573]: W0514 00:46:40.852946 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 14 00:46:40.853120 kubelet[1573]: E0514 00:46:40.853096 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:40.853714 systemd[1]: Created slice kubepods.slice. May 14 00:46:40.858030 systemd[1]: Created slice kubepods-burstable.slice. May 14 00:46:40.860579 systemd[1]: Created slice kubepods-besteffort.slice. 
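The connection-refused reflector and lease errors above are expected at this point: this kubelet is about to start, as a static pod, the very kube-apiserver it is trying to reach, and it retries with a doubling interval (200ms here, then 400ms, 800ms and 1.6s later in the log). A minimal sketch of the same wait-and-double pattern, using only the address taken from the log; the cap on the interval is an illustrative choice:

// wait_for_apiserver.go: retry a TCP dial with a doubling interval, mirroring
// the backoff seen in the log. Address from the log entries above; the 5s cap
// is illustrative, not the kubelet's real bound.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.92:6443"
	interval := 200 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		fmt.Printf("dial %s failed (%v), retrying in %s\n", addr, err, interval)
		time.Sleep(interval)
		if interval < 5*time.Second {
			interval *= 2
		}
	}
}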
May 14 00:46:40.871025 kubelet[1573]: I0514 00:46:40.871000 1573 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:46:40.871170 kubelet[1573]: I0514 00:46:40.871150 1573 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:46:40.871215 kubelet[1573]: I0514 00:46:40.871167 1573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:46:40.871458 kubelet[1573]: I0514 00:46:40.871432 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:46:40.872201 kubelet[1573]: E0514 00:46:40.872179 1573 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:46:40.959896 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 00:46:40.970703 systemd[1]: Created slice kubepods-burstable-pod58d0cf11abf6bff783769d6cfc792e41.slice. May 14 00:46:40.972245 kubelet[1573]: I0514 00:46:40.972213 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:46:40.972642 kubelet[1573]: E0514 00:46:40.972620 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" May 14 00:46:40.984190 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 00:46:41.009077 kubelet[1573]: E0514 00:46:41.009029 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" May 14 00:46:41.107301 kubelet[1573]: I0514 00:46:41.107265 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58d0cf11abf6bff783769d6cfc792e41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"58d0cf11abf6bff783769d6cfc792e41\") " pod="kube-system/kube-apiserver-localhost" May 14 00:46:41.107376 kubelet[1573]: I0514 00:46:41.107304 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:41.107376 kubelet[1573]: I0514 00:46:41.107326 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:41.107430 kubelet[1573]: I0514 00:46:41.107383 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:41.107430 kubelet[1573]: I0514 00:46:41.107416 1573 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:41.107472 kubelet[1573]: I0514 00:46:41.107436 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58d0cf11abf6bff783769d6cfc792e41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"58d0cf11abf6bff783769d6cfc792e41\") " pod="kube-system/kube-apiserver-localhost" May 14 00:46:41.107472 kubelet[1573]: I0514 00:46:41.107453 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58d0cf11abf6bff783769d6cfc792e41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"58d0cf11abf6bff783769d6cfc792e41\") " pod="kube-system/kube-apiserver-localhost" May 14 00:46:41.107517 kubelet[1573]: I0514 00:46:41.107478 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:41.107517 kubelet[1573]: I0514 00:46:41.107504 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:46:41.174418 kubelet[1573]: I0514 00:46:41.174388 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:46:41.174758 kubelet[1573]: E0514 00:46:41.174720 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" May 14 00:46:41.269088 kubelet[1573]: E0514 00:46:41.268991 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:41.269858 env[1213]: time="2025-05-14T00:46:41.269598680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 00:46:41.283444 kubelet[1573]: E0514 00:46:41.283416 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:41.283771 env[1213]: time="2025-05-14T00:46:41.283735169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:58d0cf11abf6bff783769d6cfc792e41,Namespace:kube-system,Attempt:0,}" May 14 00:46:41.288063 kubelet[1573]: E0514 00:46:41.288031 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:41.288508 env[1213]: time="2025-05-14T00:46:41.288303645Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 00:46:41.409999 kubelet[1573]: E0514 00:46:41.409944 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" May 14 00:46:41.576534 kubelet[1573]: I0514 00:46:41.576443 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:46:41.576768 kubelet[1573]: E0514 00:46:41.576730 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" May 14 00:46:41.660732 kubelet[1573]: W0514 00:46:41.660663 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 14 00:46:41.660866 kubelet[1573]: E0514 00:46:41.660745 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:41.708633 kubelet[1573]: W0514 00:46:41.708542 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 14 00:46:41.708633 kubelet[1573]: E0514 00:46:41.708600 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 14 00:46:41.788305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114635735.mount: Deactivated successfully. 
May 14 00:46:41.792555 env[1213]: time="2025-05-14T00:46:41.792513943Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.794482 env[1213]: time="2025-05-14T00:46:41.794431527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.795376 env[1213]: time="2025-05-14T00:46:41.795337466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.796199 env[1213]: time="2025-05-14T00:46:41.796176520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.798321 env[1213]: time="2025-05-14T00:46:41.798294802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.799741 env[1213]: time="2025-05-14T00:46:41.799684455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.800977 env[1213]: time="2025-05-14T00:46:41.800952131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.803356 env[1213]: time="2025-05-14T00:46:41.803322245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.804860 env[1213]: time="2025-05-14T00:46:41.804830918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.805796 env[1213]: time="2025-05-14T00:46:41.805769160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.807129 env[1213]: time="2025-05-14T00:46:41.807103682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.808814 env[1213]: time="2025-05-14T00:46:41.808777071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:41.841713 env[1213]: time="2025-05-14T00:46:41.841592339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:46:41.841821 env[1213]: time="2025-05-14T00:46:41.841639395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:46:41.841821 env[1213]: time="2025-05-14T00:46:41.841658346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:46:41.841983 env[1213]: time="2025-05-14T00:46:41.841937364Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f044dbd6f5c287bd9b59f0689f284ff30d2174f52957ce6531063986fe1a93b4 pid=1625 runtime=io.containerd.runc.v2 May 14 00:46:41.844037 env[1213]: time="2025-05-14T00:46:41.843969130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:46:41.844037 env[1213]: time="2025-05-14T00:46:41.844001394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:46:41.844037 env[1213]: time="2025-05-14T00:46:41.844011709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:46:41.844236 env[1213]: time="2025-05-14T00:46:41.844179064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:46:41.844236 env[1213]: time="2025-05-14T00:46:41.844213046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:46:41.844236 env[1213]: time="2025-05-14T00:46:41.844224440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:46:41.844442 env[1213]: time="2025-05-14T00:46:41.844392115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0347ff23276816c1d6eb472459d55a0bc09c6001b96c0bcae3edf43204bc3fd5 pid=1637 runtime=io.containerd.runc.v2 May 14 00:46:41.844483 env[1213]: time="2025-05-14T00:46:41.844435733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/722581bfb06a1d625c413c88fd280e182992fc6ea099d8048e4020b68aa7e12e pid=1636 runtime=io.containerd.runc.v2 May 14 00:46:41.854337 systemd[1]: Started cri-containerd-f044dbd6f5c287bd9b59f0689f284ff30d2174f52957ce6531063986fe1a93b4.scope. May 14 00:46:41.864266 systemd[1]: Started cri-containerd-0347ff23276816c1d6eb472459d55a0bc09c6001b96c0bcae3edf43204bc3fd5.scope. May 14 00:46:41.865304 systemd[1]: Started cri-containerd-722581bfb06a1d625c413c88fd280e182992fc6ea099d8048e4020b68aa7e12e.scope. 
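Each sandbox started above is backed by a runc v2 shim: the containerd entries give the shim task directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/, and systemd reports a matching cri-containerd-<id>.scope unit. A minimal sketch that reconstructs both paths from a sandbox ID, using the kube-scheduler sandbox ID from the log:

// shim_paths.go: minimal sketch mapping a sandbox/container ID to the systemd
// scope unit and shim task directory seen in the containerd entries above.
// Pure string formatting; the example ID is taken from the log.
package main

import "fmt"

func main() {
	id := "f044dbd6f5c287bd9b59f0689f284ff30d2174f52957ce6531063986fe1a93b4"
	fmt.Printf("systemd unit:  cri-containerd-%s.scope\n", id)
	fmt.Printf("shim task dir: /run/containerd/io.containerd.runtime.v2.task/k8s.io/%s\n", id)
}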
May 14 00:46:41.908802 env[1213]: time="2025-05-14T00:46:41.908748021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f044dbd6f5c287bd9b59f0689f284ff30d2174f52957ce6531063986fe1a93b4\"" May 14 00:46:41.912397 env[1213]: time="2025-05-14T00:46:41.912350908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:58d0cf11abf6bff783769d6cfc792e41,Namespace:kube-system,Attempt:0,} returns sandbox id \"0347ff23276816c1d6eb472459d55a0bc09c6001b96c0bcae3edf43204bc3fd5\"" May 14 00:46:41.913815 kubelet[1573]: E0514 00:46:41.913777 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:41.915666 kubelet[1573]: E0514 00:46:41.915648 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:41.919771 env[1213]: time="2025-05-14T00:46:41.919741909Z" level=info msg="CreateContainer within sandbox \"f044dbd6f5c287bd9b59f0689f284ff30d2174f52957ce6531063986fe1a93b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:46:41.920017 env[1213]: time="2025-05-14T00:46:41.919900588Z" level=info msg="CreateContainer within sandbox \"0347ff23276816c1d6eb472459d55a0bc09c6001b96c0bcae3edf43204bc3fd5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:46:41.921060 env[1213]: time="2025-05-14T00:46:41.921032333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"722581bfb06a1d625c413c88fd280e182992fc6ea099d8048e4020b68aa7e12e\"" May 14 00:46:41.921823 kubelet[1573]: E0514 00:46:41.921798 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:41.923796 env[1213]: time="2025-05-14T00:46:41.923762064Z" level=info msg="CreateContainer within sandbox \"722581bfb06a1d625c413c88fd280e182992fc6ea099d8048e4020b68aa7e12e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:46:41.935800 env[1213]: time="2025-05-14T00:46:41.935764359Z" level=info msg="CreateContainer within sandbox \"f044dbd6f5c287bd9b59f0689f284ff30d2174f52957ce6531063986fe1a93b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"beede9f2a0ca490427b75c5e83838e407e30a1bfdf2f62214a49ad876ab3098b\"" May 14 00:46:41.936595 env[1213]: time="2025-05-14T00:46:41.936541524Z" level=info msg="StartContainer for \"beede9f2a0ca490427b75c5e83838e407e30a1bfdf2f62214a49ad876ab3098b\"" May 14 00:46:41.939198 env[1213]: time="2025-05-14T00:46:41.939161951Z" level=info msg="CreateContainer within sandbox \"0347ff23276816c1d6eb472459d55a0bc09c6001b96c0bcae3edf43204bc3fd5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1db9ad7beaa22aad38a11309d063234a2adbc4ef03851bc32695de044f62b274\"" May 14 00:46:41.939644 env[1213]: time="2025-05-14T00:46:41.939618359Z" level=info msg="StartContainer for \"1db9ad7beaa22aad38a11309d063234a2adbc4ef03851bc32695de044f62b274\"" May 14 00:46:41.942665 env[1213]: time="2025-05-14T00:46:41.942629667Z" level=info 
msg="CreateContainer within sandbox \"722581bfb06a1d625c413c88fd280e182992fc6ea099d8048e4020b68aa7e12e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ab3fa12c2e0382b7c849076ff6af37cbdd39aefdf2f521cc869f65b59d2c40c\"" May 14 00:46:41.943133 env[1213]: time="2025-05-14T00:46:41.943103626Z" level=info msg="StartContainer for \"4ab3fa12c2e0382b7c849076ff6af37cbdd39aefdf2f521cc869f65b59d2c40c\"" May 14 00:46:41.952885 systemd[1]: Started cri-containerd-beede9f2a0ca490427b75c5e83838e407e30a1bfdf2f62214a49ad876ab3098b.scope. May 14 00:46:41.958820 systemd[1]: Started cri-containerd-1db9ad7beaa22aad38a11309d063234a2adbc4ef03851bc32695de044f62b274.scope. May 14 00:46:41.962904 systemd[1]: Started cri-containerd-4ab3fa12c2e0382b7c849076ff6af37cbdd39aefdf2f521cc869f65b59d2c40c.scope. May 14 00:46:42.033850 env[1213]: time="2025-05-14T00:46:42.033807105Z" level=info msg="StartContainer for \"beede9f2a0ca490427b75c5e83838e407e30a1bfdf2f62214a49ad876ab3098b\" returns successfully" May 14 00:46:42.034450 env[1213]: time="2025-05-14T00:46:42.034397042Z" level=info msg="StartContainer for \"1db9ad7beaa22aad38a11309d063234a2adbc4ef03851bc32695de044f62b274\" returns successfully" May 14 00:46:42.039349 env[1213]: time="2025-05-14T00:46:42.037731039Z" level=info msg="StartContainer for \"4ab3fa12c2e0382b7c849076ff6af37cbdd39aefdf2f521cc869f65b59d2c40c\" returns successfully" May 14 00:46:42.211125 kubelet[1573]: E0514 00:46:42.211011 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" May 14 00:46:42.377944 kubelet[1573]: I0514 00:46:42.377916 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:46:42.859706 kubelet[1573]: E0514 00:46:42.859678 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:42.861503 kubelet[1573]: E0514 00:46:42.861477 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:42.863601 kubelet[1573]: E0514 00:46:42.863583 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:43.544973 kubelet[1573]: I0514 00:46:43.544927 1573 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:46:43.795846 kubelet[1573]: I0514 00:46:43.795741 1573 apiserver.go:52] "Watching apiserver" May 14 00:46:43.805609 kubelet[1573]: I0514 00:46:43.805588 1573 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:46:43.869493 kubelet[1573]: E0514 00:46:43.869460 1573 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 14 00:46:43.869493 kubelet[1573]: E0514 00:46:43.869478 1573 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 
00:46:43.869638 kubelet[1573]: E0514 00:46:43.869614 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:43.869694 kubelet[1573]: E0514 00:46:43.869675 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:45.690930 systemd[1]: Reloading. May 14 00:46:45.742141 /usr/lib/systemd/system-generators/torcx-generator[1872]: time="2025-05-14T00:46:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:46:45.742568 /usr/lib/systemd/system-generators/torcx-generator[1872]: time="2025-05-14T00:46:45Z" level=info msg="torcx already run" May 14 00:46:45.801780 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:46:45.801962 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:46:45.818019 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:46:45.901033 kubelet[1573]: I0514 00:46:45.900989 1573 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:46:45.902393 systemd[1]: Stopping kubelet.service... May 14 00:46:45.924715 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:46:45.924897 systemd[1]: Stopped kubelet.service. May 14 00:46:45.924942 systemd[1]: kubelet.service: Consumed 1.437s CPU time. May 14 00:46:45.926459 systemd[1]: Starting kubelet.service... May 14 00:46:46.013894 systemd[1]: Started kubelet.service. May 14 00:46:46.048426 kubelet[1912]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:46:46.048426 kubelet[1912]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:46:46.048426 kubelet[1912]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:46:46.048855 kubelet[1912]: I0514 00:46:46.048472 1912 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:46:46.054020 kubelet[1912]: I0514 00:46:46.053982 1912 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 00:46:46.054150 kubelet[1912]: I0514 00:46:46.054136 1912 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:46:46.054446 kubelet[1912]: I0514 00:46:46.054424 1912 server.go:929] "Client rotation is on, will bootstrap in background" May 14 00:46:46.055825 kubelet[1912]: I0514 00:46:46.055798 1912 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:46:46.057770 kubelet[1912]: I0514 00:46:46.057730 1912 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:46:46.060618 kubelet[1912]: E0514 00:46:46.060586 1912 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 00:46:46.060618 kubelet[1912]: I0514 00:46:46.060613 1912 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 00:46:46.064294 kubelet[1912]: I0514 00:46:46.064269 1912 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:46:46.064423 kubelet[1912]: I0514 00:46:46.064410 1912 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 00:46:46.064537 kubelet[1912]: I0514 00:46:46.064516 1912 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:46:46.064676 kubelet[1912]: I0514 00:46:46.064536 1912 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:46:46.064754 kubelet[1912]: I0514 00:46:46.064688 1912 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:46:46.064754 kubelet[1912]: I0514 00:46:46.064697 1912 container_manager_linux.go:300] "Creating device plugin manager" May 14 00:46:46.064754 kubelet[1912]: I0514 00:46:46.064724 1912 state_mem.go:36] "Initialized new in-memory state store" May 14 00:46:46.064818 kubelet[1912]: I0514 00:46:46.064813 1912 kubelet.go:408] "Attempting to sync node with API server" May 14 00:46:46.064850 kubelet[1912]: I0514 00:46:46.064824 1912 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:46:46.064850 kubelet[1912]: I0514 00:46:46.064844 1912 kubelet.go:314] "Adding apiserver pod source" May 14 00:46:46.064891 kubelet[1912]: I0514 00:46:46.064854 1912 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:46:46.069385 kubelet[1912]: I0514 00:46:46.069362 1912 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:46:46.069918 kubelet[1912]: I0514 00:46:46.069895 1912 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:46:46.070514 kubelet[1912]: I0514 00:46:46.070497 1912 server.go:1269] "Started kubelet" May 14 00:46:46.072393 kubelet[1912]: I0514 00:46:46.072370 1912 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:46:46.077465 kubelet[1912]: I0514 00:46:46.077436 1912 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:46:46.078473 kubelet[1912]: I0514 00:46:46.078449 1912 server.go:460] "Adding debug handlers to kubelet server" May 14 00:46:46.079390 kubelet[1912]: I0514 00:46:46.079342 1912 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:46:46.080830 kubelet[1912]: I0514 00:46:46.080809 1912 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:46:46.081109 kubelet[1912]: I0514 00:46:46.081086 1912 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:46:46.082181 kubelet[1912]: I0514 00:46:46.082164 1912 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 00:46:46.082438 kubelet[1912]: E0514 00:46:46.082412 1912 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:46:46.084005 kubelet[1912]: I0514 00:46:46.083961 1912 reconciler.go:26] "Reconciler: start to sync state" May 14 00:46:46.084083 kubelet[1912]: I0514 00:46:46.084028 1912 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 00:46:46.092614 kubelet[1912]: I0514 00:46:46.092092 1912 factory.go:221] Registration of the systemd container factory successfully May 14 00:46:46.092614 kubelet[1912]: I0514 00:46:46.092199 1912 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:46:46.097130 kubelet[1912]: I0514 00:46:46.095096 1912 factory.go:221] Registration of the containerd container factory successfully May 14 00:46:46.097130 kubelet[1912]: E0514 00:46:46.095979 1912 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:46:46.099569 kubelet[1912]: I0514 00:46:46.099533 1912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:46:46.100679 kubelet[1912]: I0514 00:46:46.100609 1912 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:46:46.100679 kubelet[1912]: I0514 00:46:46.100633 1912 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:46:46.100679 kubelet[1912]: I0514 00:46:46.100650 1912 kubelet.go:2321] "Starting kubelet main sync loop" May 14 00:46:46.100795 kubelet[1912]: E0514 00:46:46.100687 1912 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:46:46.129801 kubelet[1912]: I0514 00:46:46.129773 1912 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:46:46.129801 kubelet[1912]: I0514 00:46:46.129793 1912 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:46:46.129985 kubelet[1912]: I0514 00:46:46.129814 1912 state_mem.go:36] "Initialized new in-memory state store" May 14 00:46:46.129985 kubelet[1912]: I0514 00:46:46.129950 1912 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:46:46.129985 kubelet[1912]: I0514 00:46:46.129961 1912 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:46:46.129985 kubelet[1912]: I0514 00:46:46.129978 1912 policy_none.go:49] "None policy: Start" May 14 00:46:46.130563 kubelet[1912]: I0514 00:46:46.130546 1912 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:46:46.130600 kubelet[1912]: I0514 00:46:46.130571 1912 state_mem.go:35] "Initializing new in-memory state store" May 14 00:46:46.130722 kubelet[1912]: I0514 00:46:46.130707 1912 state_mem.go:75] "Updated machine memory state" May 14 00:46:46.134245 kubelet[1912]: I0514 00:46:46.134220 1912 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:46:46.134713 kubelet[1912]: I0514 00:46:46.134693 1912 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:46:46.134828 kubelet[1912]: I0514 00:46:46.134794 1912 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:46:46.135119 kubelet[1912]: I0514 00:46:46.135008 1912 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:46:46.238192 kubelet[1912]: I0514 00:46:46.238153 1912 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 00:46:46.246185 kubelet[1912]: I0514 00:46:46.246151 1912 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 00:46:46.246311 kubelet[1912]: I0514 00:46:46.246238 1912 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 00:46:46.285345 kubelet[1912]: I0514 00:46:46.285213 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:46.285345 kubelet[1912]: I0514 00:46:46.285260 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:46.285345 kubelet[1912]: I0514 00:46:46.285280 1912 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:46.285345 kubelet[1912]: I0514 00:46:46.285302 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58d0cf11abf6bff783769d6cfc792e41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"58d0cf11abf6bff783769d6cfc792e41\") " pod="kube-system/kube-apiserver-localhost" May 14 00:46:46.285345 kubelet[1912]: I0514 00:46:46.285319 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58d0cf11abf6bff783769d6cfc792e41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"58d0cf11abf6bff783769d6cfc792e41\") " pod="kube-system/kube-apiserver-localhost" May 14 00:46:46.286547 kubelet[1912]: I0514 00:46:46.285333 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:46.286547 kubelet[1912]: I0514 00:46:46.286436 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:46:46.286547 kubelet[1912]: I0514 00:46:46.286453 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 00:46:46.286547 kubelet[1912]: I0514 00:46:46.286494 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58d0cf11abf6bff783769d6cfc792e41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"58d0cf11abf6bff783769d6cfc792e41\") " pod="kube-system/kube-apiserver-localhost" May 14 00:46:46.509491 kubelet[1912]: E0514 00:46:46.509445 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:46.509632 kubelet[1912]: E0514 00:46:46.509453 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:46.509787 kubelet[1912]: E0514 00:46:46.509766 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:46.690980 sudo[1946]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 
00:46:46.691205 sudo[1946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 14 00:46:47.065327 kubelet[1912]: I0514 00:46:47.065272 1912 apiserver.go:52] "Watching apiserver" May 14 00:46:47.084512 kubelet[1912]: I0514 00:46:47.084482 1912 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 00:46:47.117088 kubelet[1912]: E0514 00:46:47.117053 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:47.117725 kubelet[1912]: E0514 00:46:47.117692 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:47.117948 kubelet[1912]: E0514 00:46:47.117924 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:47.143922 kubelet[1912]: I0514 00:46:47.143848 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.143833476 podStartE2EDuration="1.143833476s" podCreationTimestamp="2025-05-14 00:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:46:47.137732068 +0000 UTC m=+1.120396186" watchObservedRunningTime="2025-05-14 00:46:47.143833476 +0000 UTC m=+1.126497594" May 14 00:46:47.152045 kubelet[1912]: I0514 00:46:47.151987 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.151975338 podStartE2EDuration="1.151975338s" podCreationTimestamp="2025-05-14 00:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:46:47.151321167 +0000 UTC m=+1.133985285" watchObservedRunningTime="2025-05-14 00:46:47.151975338 +0000 UTC m=+1.134639456" May 14 00:46:47.152159 kubelet[1912]: I0514 00:46:47.152070 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.152065157 podStartE2EDuration="1.152065157s" podCreationTimestamp="2025-05-14 00:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:46:47.144417063 +0000 UTC m=+1.127081141" watchObservedRunningTime="2025-05-14 00:46:47.152065157 +0000 UTC m=+1.134729275" May 14 00:46:47.169000 sudo[1946]: pam_unix(sudo:session): session closed for user root May 14 00:46:48.118116 kubelet[1912]: E0514 00:46:48.118066 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:48.899593 sudo[1313]: pam_unix(sudo:session): session closed for user root May 14 00:46:48.901363 sshd[1309]: pam_unix(sshd:session): session closed for user core May 14 00:46:48.904061 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:43212.service: Deactivated successfully. May 14 00:46:48.904865 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:46:48.905030 systemd[1]: session-5.scope: Consumed 6.400s CPU time. 
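The recurring dns.go:153 "Nameserver limits exceeded" errors mean the host resolv.conf lists more nameservers than the kubelet will propagate into pod resolv.conf files; per the applied line, only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive the cut. A minimal sketch of that truncation, assuming the classic three-nameserver resolver limit (the constant and function below are illustrative, not kubelet code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the three-server cap implied by the log; the name is
// illustrative, not taken from the kubelet source.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```

Trimming the host's resolv.conf to three servers would make the warning stop repeating.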
May 14 00:46:48.905671 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. May 14 00:46:48.906426 systemd-logind[1202]: Removed session 5. May 14 00:46:50.875743 kubelet[1912]: E0514 00:46:50.875705 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:50.951092 kubelet[1912]: E0514 00:46:50.951002 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:51.152891 kubelet[1912]: I0514 00:46:51.152807 1912 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:46:51.153356 env[1213]: time="2025-05-14T00:46:51.153321929Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:46:51.153889 kubelet[1912]: I0514 00:46:51.153845 1912 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:46:52.149741 systemd[1]: Created slice kubepods-besteffort-podaa7d4a9c_45fc_481e_92b9_a66ccbccf175.slice. May 14 00:46:52.153335 kubelet[1912]: W0514 00:46:52.153296 1912 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 14 00:46:52.153579 kubelet[1912]: E0514 00:46:52.153337 1912 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 14 00:46:52.157213 systemd[1]: Created slice kubepods-burstable-pod51df70e7_881a_49a3_9802_4799eae1e484.slice. 
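At 00:46:51 the kubelet pushes the node's pod CIDR to the runtime ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24") and records the change from an empty originalPodCIDR. A quick, self-contained check of what that range provides for the node (standard-library sketch, not kubelet code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The pod CIDR reported by kuberuntime_manager and kubelet_network above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for pod IPs on this node.
	fmt.Printf("pod CIDR %s: %d addresses\n", ipnet, 1<<(bits-ones))
}
```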
May 14 00:46:52.226449 kubelet[1912]: I0514 00:46:52.226414 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51df70e7-881a-49a3-9802-4799eae1e484-clustermesh-secrets\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226449 kubelet[1912]: I0514 00:46:52.226456 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffd9\" (UniqueName: \"kubernetes.io/projected/aa7d4a9c-45fc-481e-92b9-a66ccbccf175-kube-api-access-tffd9\") pod \"kube-proxy-rhgrk\" (UID: \"aa7d4a9c-45fc-481e-92b9-a66ccbccf175\") " pod="kube-system/kube-proxy-rhgrk" May 14 00:46:52.226617 kubelet[1912]: I0514 00:46:52.226478 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-run\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226617 kubelet[1912]: I0514 00:46:52.226493 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-net\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226617 kubelet[1912]: I0514 00:46:52.226508 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-xtables-lock\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226617 kubelet[1912]: I0514 00:46:52.226522 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5t8t\" (UniqueName: \"kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-kube-api-access-z5t8t\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226617 kubelet[1912]: I0514 00:46:52.226536 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51df70e7-881a-49a3-9802-4799eae1e484-cilium-config-path\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226732 kubelet[1912]: I0514 00:46:52.226551 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa7d4a9c-45fc-481e-92b9-a66ccbccf175-lib-modules\") pod \"kube-proxy-rhgrk\" (UID: \"aa7d4a9c-45fc-481e-92b9-a66ccbccf175\") " pod="kube-system/kube-proxy-rhgrk" May 14 00:46:52.226732 kubelet[1912]: I0514 00:46:52.226566 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-cgroup\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226732 kubelet[1912]: I0514 00:46:52.226594 1912 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cni-path\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226732 kubelet[1912]: I0514 00:46:52.226610 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-hubble-tls\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226732 kubelet[1912]: I0514 00:46:52.226624 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa7d4a9c-45fc-481e-92b9-a66ccbccf175-kube-proxy\") pod \"kube-proxy-rhgrk\" (UID: \"aa7d4a9c-45fc-481e-92b9-a66ccbccf175\") " pod="kube-system/kube-proxy-rhgrk" May 14 00:46:52.226732 kubelet[1912]: I0514 00:46:52.226639 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-bpf-maps\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226860 kubelet[1912]: I0514 00:46:52.226653 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-kernel\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226860 kubelet[1912]: I0514 00:46:52.226675 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-hostproc\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226860 kubelet[1912]: I0514 00:46:52.226690 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-lib-modules\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.226860 kubelet[1912]: I0514 00:46:52.226704 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa7d4a9c-45fc-481e-92b9-a66ccbccf175-xtables-lock\") pod \"kube-proxy-rhgrk\" (UID: \"aa7d4a9c-45fc-481e-92b9-a66ccbccf175\") " pod="kube-system/kube-proxy-rhgrk" May 14 00:46:52.226860 kubelet[1912]: I0514 00:46:52.226720 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-etc-cni-netd\") pod \"cilium-vj6n9\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " pod="kube-system/cilium-vj6n9" May 14 00:46:52.276067 systemd[1]: Created slice kubepods-besteffort-poda66d07d7_1b8d_4c84_9dd4_07cb969c76df.slice. 
May 14 00:46:52.327936 kubelet[1912]: I0514 00:46:52.327898 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-cilium-config-path\") pod \"cilium-operator-5d85765b45-46kdf\" (UID: \"a66d07d7-1b8d-4c84-9dd4-07cb969c76df\") " pod="kube-system/cilium-operator-5d85765b45-46kdf" May 14 00:46:52.328049 kubelet[1912]: I0514 00:46:52.328024 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sks6p\" (UniqueName: \"kubernetes.io/projected/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-kube-api-access-sks6p\") pod \"cilium-operator-5d85765b45-46kdf\" (UID: \"a66d07d7-1b8d-4c84-9dd4-07cb969c76df\") " pod="kube-system/cilium-operator-5d85765b45-46kdf" May 14 00:46:52.328941 kubelet[1912]: I0514 00:46:52.328917 1912 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 14 00:46:52.455490 kubelet[1912]: E0514 00:46:52.455452 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:52.456122 env[1213]: time="2025-05-14T00:46:52.456073970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhgrk,Uid:aa7d4a9c-45fc-481e-92b9-a66ccbccf175,Namespace:kube-system,Attempt:0,}" May 14 00:46:52.469670 env[1213]: time="2025-05-14T00:46:52.469596326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:46:52.469670 env[1213]: time="2025-05-14T00:46:52.469636683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:46:52.469670 env[1213]: time="2025-05-14T00:46:52.469647122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:46:52.469996 env[1213]: time="2025-05-14T00:46:52.469963020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0e00fa95a716dbb82735553f2de0184c55cf96771539f38a7c67b225d87033 pid=2008 runtime=io.containerd.runc.v2 May 14 00:46:52.480534 systemd[1]: Started cri-containerd-ca0e00fa95a716dbb82735553f2de0184c55cf96771539f38a7c67b225d87033.scope. 
May 14 00:46:52.514545 env[1213]: time="2025-05-14T00:46:52.514505176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhgrk,Uid:aa7d4a9c-45fc-481e-92b9-a66ccbccf175,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca0e00fa95a716dbb82735553f2de0184c55cf96771539f38a7c67b225d87033\"" May 14 00:46:52.515814 kubelet[1912]: E0514 00:46:52.515551 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:52.519276 env[1213]: time="2025-05-14T00:46:52.518869558Z" level=info msg="CreateContainer within sandbox \"ca0e00fa95a716dbb82735553f2de0184c55cf96771539f38a7c67b225d87033\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:46:52.531071 env[1213]: time="2025-05-14T00:46:52.531024847Z" level=info msg="CreateContainer within sandbox \"ca0e00fa95a716dbb82735553f2de0184c55cf96771539f38a7c67b225d87033\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"415feb274080ece35588a047456aa1f3304c4b163f764e39b8f61cd436f3fbde\"" May 14 00:46:52.531542 env[1213]: time="2025-05-14T00:46:52.531517254Z" level=info msg="StartContainer for \"415feb274080ece35588a047456aa1f3304c4b163f764e39b8f61cd436f3fbde\"" May 14 00:46:52.546380 systemd[1]: Started cri-containerd-415feb274080ece35588a047456aa1f3304c4b163f764e39b8f61cd436f3fbde.scope. May 14 00:46:52.579163 kubelet[1912]: E0514 00:46:52.578904 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:52.580584 env[1213]: time="2025-05-14T00:46:52.580538343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-46kdf,Uid:a66d07d7-1b8d-4c84-9dd4-07cb969c76df,Namespace:kube-system,Attempt:0,}" May 14 00:46:52.585671 env[1213]: time="2025-05-14T00:46:52.585601117Z" level=info msg="StartContainer for \"415feb274080ece35588a047456aa1f3304c4b163f764e39b8f61cd436f3fbde\" returns successfully" May 14 00:46:52.597351 env[1213]: time="2025-05-14T00:46:52.596892346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:46:52.597351 env[1213]: time="2025-05-14T00:46:52.596935263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:46:52.597351 env[1213]: time="2025-05-14T00:46:52.596946342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:46:52.597351 env[1213]: time="2025-05-14T00:46:52.597086852Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9 pid=2084 runtime=io.containerd.runc.v2 May 14 00:46:52.611752 systemd[1]: Started cri-containerd-a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9.scope. 
May 14 00:46:52.659127 env[1213]: time="2025-05-14T00:46:52.659072976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-46kdf,Uid:a66d07d7-1b8d-4c84-9dd4-07cb969c76df,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9\"" May 14 00:46:52.660699 kubelet[1912]: E0514 00:46:52.660660 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:52.662198 env[1213]: time="2025-05-14T00:46:52.662152086Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:46:53.129117 kubelet[1912]: E0514 00:46:53.128823 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:53.329213 kubelet[1912]: E0514 00:46:53.329169 1912 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 14 00:46:53.329557 kubelet[1912]: E0514 00:46:53.329299 1912 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51df70e7-881a-49a3-9802-4799eae1e484-clustermesh-secrets podName:51df70e7-881a-49a3-9802-4799eae1e484 nodeName:}" failed. No retries permitted until 2025-05-14 00:46:53.829270899 +0000 UTC m=+7.811935017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/51df70e7-881a-49a3-9802-4799eae1e484-clustermesh-secrets") pod "cilium-vj6n9" (UID: "51df70e7-881a-49a3-9802-4799eae1e484") : failed to sync secret cache: timed out waiting for the condition May 14 00:46:53.959986 kubelet[1912]: E0514 00:46:53.959856 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:53.960431 env[1213]: time="2025-05-14T00:46:53.960357529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vj6n9,Uid:51df70e7-881a-49a3-9802-4799eae1e484,Namespace:kube-system,Attempt:0,}" May 14 00:46:53.985403 env[1213]: time="2025-05-14T00:46:53.985182644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:46:53.985403 env[1213]: time="2025-05-14T00:46:53.985220001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:46:53.985403 env[1213]: time="2025-05-14T00:46:53.985230601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:46:53.985562 env[1213]: time="2025-05-14T00:46:53.985397030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45 pid=2257 runtime=io.containerd.runc.v2 May 14 00:46:54.003188 systemd[1]: Started cri-containerd-83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45.scope. 
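Mounting clustermesh-secrets fails on the first pass because the node was initially forbidden from listing that secret (the reflector warning at 00:46:52.153), so the secret cache times out and nestedpendingoperations re-queues MountVolume.SetUp with durationBeforeRetry 500ms; the cilium-vj6n9 sandbox at 00:46:53.960 and the containers that follow show the retry evidently went through. A minimal retry-with-fixed-delay sketch in the same spirit (mountSecret is a hypothetical stand-in, and the kubelet's real backoff grows between attempts):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotSynced = errors.New("failed to sync secret cache: timed out waiting for the condition")

// mountSecret stands in for MountVolume.SetUp; it fails until the third attempt
// to mimic the secret cache eventually becoming populated.
func mountSecret(attempt int) error {
	if attempt < 3 {
		return errNotSynced
	}
	return nil
}

func main() {
	const durationBeforeRetry = 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := mountSecret(attempt); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, durationBeforeRetry)
			time.Sleep(durationBeforeRetry)
			continue
		}
		fmt.Printf("attempt %d: mounted clustermesh-secrets\n", attempt)
		return
	}
}
```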
May 14 00:46:54.034242 env[1213]: time="2025-05-14T00:46:54.034193988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vj6n9,Uid:51df70e7-881a-49a3-9802-4799eae1e484,Namespace:kube-system,Attempt:0,} returns sandbox id \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\"" May 14 00:46:54.035435 kubelet[1912]: E0514 00:46:54.034930 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:54.684358 env[1213]: time="2025-05-14T00:46:54.684307148Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:54.685654 env[1213]: time="2025-05-14T00:46:54.685624588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:54.687449 env[1213]: time="2025-05-14T00:46:54.687407639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:46:54.687910 env[1213]: time="2025-05-14T00:46:54.687870370Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 00:46:54.689639 env[1213]: time="2025-05-14T00:46:54.689612184Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:46:54.690398 env[1213]: time="2025-05-14T00:46:54.690368537Z" level=info msg="CreateContainer within sandbox \"a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:46:54.699754 env[1213]: time="2025-05-14T00:46:54.699717765Z" level=info msg="CreateContainer within sandbox \"a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\"" May 14 00:46:54.701082 env[1213]: time="2025-05-14T00:46:54.700332287Z" level=info msg="StartContainer for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\"" May 14 00:46:54.715598 systemd[1]: Started cri-containerd-ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c.scope. 
May 14 00:46:54.748659 env[1213]: time="2025-05-14T00:46:54.748620611Z" level=info msg="StartContainer for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" returns successfully" May 14 00:46:55.140272 kubelet[1912]: E0514 00:46:55.139340 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:55.152583 kubelet[1912]: I0514 00:46:55.152512 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhgrk" podStartSLOduration=3.152494255 podStartE2EDuration="3.152494255s" podCreationTimestamp="2025-05-14 00:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:46:53.137374708 +0000 UTC m=+7.120038826" watchObservedRunningTime="2025-05-14 00:46:55.152494255 +0000 UTC m=+9.135158373" May 14 00:46:55.152783 kubelet[1912]: I0514 00:46:55.152755 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-46kdf" podStartSLOduration=1.125500453 podStartE2EDuration="3.15274896s" podCreationTimestamp="2025-05-14 00:46:52 +0000 UTC" firstStartedPulling="2025-05-14 00:46:52.661680918 +0000 UTC m=+6.644345036" lastFinishedPulling="2025-05-14 00:46:54.688929425 +0000 UTC m=+8.671593543" observedRunningTime="2025-05-14 00:46:55.152054681 +0000 UTC m=+9.134718759" watchObservedRunningTime="2025-05-14 00:46:55.15274896 +0000 UTC m=+9.135413078" May 14 00:46:55.607901 kubelet[1912]: E0514 00:46:55.607867 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:56.140885 kubelet[1912]: E0514 00:46:56.140663 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:56.140885 kubelet[1912]: E0514 00:46:56.140437 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:57.149180 kubelet[1912]: E0514 00:46:57.147481 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:46:58.812197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98474735.mount: Deactivated successfully. May 14 00:47:00.892365 kubelet[1912]: E0514 00:47:00.892331 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:00.970837 kubelet[1912]: E0514 00:47:00.970797 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:01.019336 update_engine[1204]: I0514 00:47:01.019292 1204 update_attempter.cc:509] Updating boot flags... 
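The pod_startup_latency_tracker figures in the entry above fit together as: podStartE2EDuration equals the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window (firstStartedPulling to lastFinishedPulling). That is why cilium-operator-5d85765b45-46kdf shows ~3.15s end to end but only ~1.13s against the SLO, since the operator-generic pull ran from 00:46:52.66 to 00:46:54.69. Reproducing the arithmetic with timestamps copied from the log (standard-library sketch):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cilium-operator-5d85765b45-46kdf entry above.
	created := mustParse("2025-05-14 00:46:52 +0000 UTC")
	observed := mustParse("2025-05-14 00:46:55.15274896 +0000 UTC") // watchObservedRunningTime
	pullStart := mustParse("2025-05-14 00:46:52.661680918 +0000 UTC")
	pullEnd := mustParse("2025-05-14 00:46:54.688929425 +0000 UTC")

	e2e := observed.Sub(created)        // 3.15274896s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 1.125500453s, the logged podStartSLOduration
	fmt.Printf("e2e=%s slo=%s\n", e2e, slo)
}
```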
May 14 00:47:01.248155 env[1213]: time="2025-05-14T00:47:01.248110764Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:47:01.249472 env[1213]: time="2025-05-14T00:47:01.249443987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:47:01.251158 env[1213]: time="2025-05-14T00:47:01.251134075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:47:01.251712 env[1213]: time="2025-05-14T00:47:01.251682852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 00:47:01.255575 env[1213]: time="2025-05-14T00:47:01.255534049Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:47:01.265396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819722978.mount: Deactivated successfully. May 14 00:47:01.268980 env[1213]: time="2025-05-14T00:47:01.268939360Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\"" May 14 00:47:01.269366 env[1213]: time="2025-05-14T00:47:01.269345063Z" level=info msg="StartContainer for \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\"" May 14 00:47:01.286433 systemd[1]: Started cri-containerd-2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161.scope. May 14 00:47:01.324826 env[1213]: time="2025-05-14T00:47:01.324780631Z" level=info msg="StartContainer for \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\" returns successfully" May 14 00:47:01.387946 systemd[1]: cri-containerd-2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161.scope: Deactivated successfully. 
May 14 00:47:01.497445 env[1213]: time="2025-05-14T00:47:01.497394028Z" level=info msg="shim disconnected" id=2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161 May 14 00:47:01.497445 env[1213]: time="2025-05-14T00:47:01.497440786Z" level=warning msg="cleaning up after shim disconnected" id=2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161 namespace=k8s.io May 14 00:47:01.497445 env[1213]: time="2025-05-14T00:47:01.497450625Z" level=info msg="cleaning up dead shim" May 14 00:47:01.504143 env[1213]: time="2025-05-14T00:47:01.504045905Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2394 runtime=io.containerd.runc.v2\n" May 14 00:47:02.155599 kubelet[1912]: E0514 00:47:02.155567 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:02.158730 env[1213]: time="2025-05-14T00:47:02.158677697Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:47:02.181618 env[1213]: time="2025-05-14T00:47:02.181564653Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\"" May 14 00:47:02.183105 env[1213]: time="2025-05-14T00:47:02.182285104Z" level=info msg="StartContainer for \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\"" May 14 00:47:02.196049 systemd[1]: Started cri-containerd-2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5.scope. May 14 00:47:02.242678 env[1213]: time="2025-05-14T00:47:02.242636748Z" level=info msg="StartContainer for \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\" returns successfully" May 14 00:47:02.252591 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:47:02.252784 systemd[1]: Stopped systemd-sysctl.service. May 14 00:47:02.253474 systemd[1]: Stopping systemd-sysctl.service... May 14 00:47:02.254898 systemd[1]: Starting systemd-sysctl.service... May 14 00:47:02.255835 systemd[1]: cri-containerd-2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5.scope: Deactivated successfully. May 14 00:47:02.264326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161-rootfs.mount: Deactivated successfully. May 14 00:47:02.265356 systemd[1]: Finished systemd-sysctl.service. May 14 00:47:02.271905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5-rootfs.mount: Deactivated successfully. 
May 14 00:47:02.278610 env[1213]: time="2025-05-14T00:47:02.278568297Z" level=info msg="shim disconnected" id=2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5 May 14 00:47:02.279027 env[1213]: time="2025-05-14T00:47:02.278995560Z" level=warning msg="cleaning up after shim disconnected" id=2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5 namespace=k8s.io May 14 00:47:02.279104 env[1213]: time="2025-05-14T00:47:02.279090796Z" level=info msg="cleaning up dead shim" May 14 00:47:02.285873 env[1213]: time="2025-05-14T00:47:02.285846963Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2460 runtime=io.containerd.runc.v2\n" May 14 00:47:03.158683 kubelet[1912]: E0514 00:47:03.158652 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:03.162383 env[1213]: time="2025-05-14T00:47:03.162334005Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:47:03.187799 env[1213]: time="2025-05-14T00:47:03.187743948Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\"" May 14 00:47:03.188676 env[1213]: time="2025-05-14T00:47:03.188643193Z" level=info msg="StartContainer for \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\"" May 14 00:47:03.216715 systemd[1]: Started cri-containerd-212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c.scope. May 14 00:47:03.257406 env[1213]: time="2025-05-14T00:47:03.257354352Z" level=info msg="StartContainer for \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\" returns successfully" May 14 00:47:03.266840 systemd[1]: cri-containerd-212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c.scope: Deactivated successfully. May 14 00:47:03.284722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c-rootfs.mount: Deactivated successfully. 
May 14 00:47:03.288098 env[1213]: time="2025-05-14T00:47:03.288050852Z" level=info msg="shim disconnected" id=212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c May 14 00:47:03.288098 env[1213]: time="2025-05-14T00:47:03.288094530Z" level=warning msg="cleaning up after shim disconnected" id=212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c namespace=k8s.io May 14 00:47:03.288397 env[1213]: time="2025-05-14T00:47:03.288104170Z" level=info msg="cleaning up dead shim" May 14 00:47:03.294223 env[1213]: time="2025-05-14T00:47:03.294174097Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2517 runtime=io.containerd.runc.v2\n" May 14 00:47:04.163453 kubelet[1912]: E0514 00:47:04.163422 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:04.168614 env[1213]: time="2025-05-14T00:47:04.168575308Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:47:04.184695 env[1213]: time="2025-05-14T00:47:04.184640879Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\"" May 14 00:47:04.185178 env[1213]: time="2025-05-14T00:47:04.185146621Z" level=info msg="StartContainer for \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\"" May 14 00:47:04.200811 systemd[1]: Started cri-containerd-01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8.scope. May 14 00:47:04.246311 env[1213]: time="2025-05-14T00:47:04.246259982Z" level=info msg="StartContainer for \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\" returns successfully" May 14 00:47:04.247622 systemd[1]: cri-containerd-01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8.scope: Deactivated successfully. May 14 00:47:04.265333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8-rootfs.mount: Deactivated successfully. 
May 14 00:47:04.265668 env[1213]: time="2025-05-14T00:47:04.265614113Z" level=info msg="shim disconnected" id=01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8 May 14 00:47:04.265790 env[1213]: time="2025-05-14T00:47:04.265774027Z" level=warning msg="cleaning up after shim disconnected" id=01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8 namespace=k8s.io May 14 00:47:04.265860 env[1213]: time="2025-05-14T00:47:04.265847864Z" level=info msg="cleaning up dead shim" May 14 00:47:04.272233 env[1213]: time="2025-05-14T00:47:04.272199432Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n" May 14 00:47:05.166893 kubelet[1912]: E0514 00:47:05.166849 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:05.168782 env[1213]: time="2025-05-14T00:47:05.168741835Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:47:05.182237 env[1213]: time="2025-05-14T00:47:05.182198684Z" level=info msg="CreateContainer within sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\"" May 14 00:47:05.182920 env[1213]: time="2025-05-14T00:47:05.182888900Z" level=info msg="StartContainer for \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\"" May 14 00:47:05.201129 systemd[1]: Started cri-containerd-b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf.scope. May 14 00:47:05.256308 env[1213]: time="2025-05-14T00:47:05.256258017Z" level=info msg="StartContainer for \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" returns successfully" May 14 00:47:05.265183 systemd[1]: run-containerd-runc-k8s.io-b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf-runc.K5G006.mount: Deactivated successfully. May 14 00:47:05.411094 kubelet[1912]: I0514 00:47:05.411057 1912 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 00:47:05.436920 systemd[1]: Created slice kubepods-burstable-pod740d84db_9806_4230_87e8_84bb933dad58.slice. May 14 00:47:05.441586 systemd[1]: Created slice kubepods-burstable-pod24f334f2_e8cc_45f3_9338_cd0839734590.slice. May 14 00:47:05.537382 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
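Between 00:47:01 and 00:47:05 the cilium-vj6n9 pod runs its init containers strictly in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each start, exit (scope deactivated, shim disconnected, rootfs unmounted) and only then does the next CreateContainer appear, finishing with the long-running cilium-agent; the kernel's unprivileged-eBPF warnings show up alongside the agent loading its BPF programs. A small sketch that recovers that ordering from lines shaped like the ones above (the regular expression targets the request-side ContainerMetadata{Name:...} fragment and assumes one journal entry per input line):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// nameRe matches the container name in CreateContainer request entries such as
//   msg="CreateContainer within sandbox \"...\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
// The "returns container id" confirmations omit the word "container" and are skipped.
var nameRe = regexp.MustCompile(`for container &ContainerMetadata\{Name:([^,]+),`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := nameRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println(m[1]) // mount-cgroup, apply-sysctl-overwrites, ... in order
		}
	}
}
```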
May 14 00:47:05.628903 kubelet[1912]: I0514 00:47:05.628824 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbc66\" (UniqueName: \"kubernetes.io/projected/740d84db-9806-4230-87e8-84bb933dad58-kube-api-access-zbc66\") pod \"coredns-6f6b679f8f-xczcj\" (UID: \"740d84db-9806-4230-87e8-84bb933dad58\") " pod="kube-system/coredns-6f6b679f8f-xczcj" May 14 00:47:05.628903 kubelet[1912]: I0514 00:47:05.628900 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24f334f2-e8cc-45f3-9338-cd0839734590-config-volume\") pod \"coredns-6f6b679f8f-v26l2\" (UID: \"24f334f2-e8cc-45f3-9338-cd0839734590\") " pod="kube-system/coredns-6f6b679f8f-v26l2" May 14 00:47:05.629135 kubelet[1912]: I0514 00:47:05.628931 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/740d84db-9806-4230-87e8-84bb933dad58-config-volume\") pod \"coredns-6f6b679f8f-xczcj\" (UID: \"740d84db-9806-4230-87e8-84bb933dad58\") " pod="kube-system/coredns-6f6b679f8f-xczcj" May 14 00:47:05.629135 kubelet[1912]: I0514 00:47:05.628957 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fp9h\" (UniqueName: \"kubernetes.io/projected/24f334f2-e8cc-45f3-9338-cd0839734590-kube-api-access-4fp9h\") pod \"coredns-6f6b679f8f-v26l2\" (UID: \"24f334f2-e8cc-45f3-9338-cd0839734590\") " pod="kube-system/coredns-6f6b679f8f-v26l2" May 14 00:47:05.779280 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 14 00:47:06.040596 kubelet[1912]: E0514 00:47:06.040484 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:06.041450 env[1213]: time="2025-05-14T00:47:06.041088099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xczcj,Uid:740d84db-9806-4230-87e8-84bb933dad58,Namespace:kube-system,Attempt:0,}" May 14 00:47:06.044485 kubelet[1912]: E0514 00:47:06.044466 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:06.044961 env[1213]: time="2025-05-14T00:47:06.044933611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v26l2,Uid:24f334f2-e8cc-45f3-9338-cd0839734590,Namespace:kube-system,Attempt:0,}" May 14 00:47:06.170979 kubelet[1912]: E0514 00:47:06.170950 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:07.173582 kubelet[1912]: E0514 00:47:07.173541 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:07.386758 systemd-networkd[1037]: cilium_host: Link UP May 14 00:47:07.388555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 14 00:47:07.388588 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 14 00:47:07.387026 systemd-networkd[1037]: cilium_net: Link UP May 14 00:47:07.387763 systemd-networkd[1037]: cilium_net: Gained carrier May 14 
00:47:07.388333 systemd-networkd[1037]: cilium_host: Gained carrier May 14 00:47:07.390494 systemd-networkd[1037]: cilium_net: Gained IPv6LL May 14 00:47:07.464135 systemd-networkd[1037]: cilium_vxlan: Link UP May 14 00:47:07.464141 systemd-networkd[1037]: cilium_vxlan: Gained carrier May 14 00:47:07.625364 systemd-networkd[1037]: cilium_host: Gained IPv6LL May 14 00:47:07.752284 kernel: NET: Registered PF_ALG protocol family May 14 00:47:08.174125 kubelet[1912]: E0514 00:47:08.174078 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:08.331184 systemd-networkd[1037]: lxc_health: Link UP May 14 00:47:08.340424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:47:08.340123 systemd-networkd[1037]: lxc_health: Gained carrier May 14 00:47:08.618498 systemd-networkd[1037]: lxc4a408a5c88c7: Link UP May 14 00:47:08.623281 kernel: eth0: renamed from tmpa509f May 14 00:47:08.637812 systemd-networkd[1037]: lxcece0a3249a74: Link UP May 14 00:47:08.649327 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:47:08.649390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4a408a5c88c7: link becomes ready May 14 00:47:08.649560 systemd-networkd[1037]: lxc4a408a5c88c7: Gained carrier May 14 00:47:08.651319 kernel: eth0: renamed from tmp6b107 May 14 00:47:08.657307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcece0a3249a74: link becomes ready May 14 00:47:08.657836 systemd-networkd[1037]: lxcece0a3249a74: Gained carrier May 14 00:47:08.809451 systemd-networkd[1037]: cilium_vxlan: Gained IPv6LL May 14 00:47:09.513425 systemd-networkd[1037]: lxc_health: Gained IPv6LL May 14 00:47:09.967451 kubelet[1912]: E0514 00:47:09.967421 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:09.986118 kubelet[1912]: I0514 00:47:09.986057 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vj6n9" podStartSLOduration=10.76965722 podStartE2EDuration="17.986043007s" podCreationTimestamp="2025-05-14 00:46:52 +0000 UTC" firstStartedPulling="2025-05-14 00:46:54.036137469 +0000 UTC m=+8.018801587" lastFinishedPulling="2025-05-14 00:47:01.252523256 +0000 UTC m=+15.235187374" observedRunningTime="2025-05-14 00:47:06.186029985 +0000 UTC m=+20.168694103" watchObservedRunningTime="2025-05-14 00:47:09.986043007 +0000 UTC m=+23.968707125" May 14 00:47:10.090423 systemd-networkd[1037]: lxc4a408a5c88c7: Gained IPv6LL May 14 00:47:10.177839 kubelet[1912]: E0514 00:47:10.177800 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:10.281696 systemd-networkd[1037]: lxcece0a3249a74: Gained IPv6LL May 14 00:47:10.582659 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:59660.service. May 14 00:47:10.623450 sshd[3132]: Accepted publickey for core from 10.0.0.1 port 59660 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:10.625210 sshd[3132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:10.628764 systemd-logind[1202]: New session 6 of user core. May 14 00:47:10.629687 systemd[1]: Started session-6.scope. 
May 14 00:47:10.748387 sshd[3132]: pam_unix(sshd:session): session closed for user core May 14 00:47:10.750858 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:59660.service: Deactivated successfully. May 14 00:47:10.751662 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:47:10.752168 systemd-logind[1202]: Session 6 logged out. Waiting for processes to exit. May 14 00:47:10.753027 systemd-logind[1202]: Removed session 6. May 14 00:47:11.179653 kubelet[1912]: E0514 00:47:11.179616 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:12.247127 env[1213]: time="2025-05-14T00:47:12.246972706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:47:12.247127 env[1213]: time="2025-05-14T00:47:12.247005025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:47:12.247127 env[1213]: time="2025-05-14T00:47:12.247025025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:47:12.247520 env[1213]: time="2025-05-14T00:47:12.247155221Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b107a982257f3b29ad7eb6594b57d23567e32f125394cb41ca0df33a6c5bccd pid=3182 runtime=io.containerd.runc.v2 May 14 00:47:12.247690 env[1213]: time="2025-05-14T00:47:12.245413066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:47:12.247690 env[1213]: time="2025-05-14T00:47:12.246970826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:47:12.247690 env[1213]: time="2025-05-14T00:47:12.246982226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:47:12.247690 env[1213]: time="2025-05-14T00:47:12.247206660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a509fbeb97939dceb139f846930c75b43eaa87b756528e28b7d445690b350848 pid=3167 runtime=io.containerd.runc.v2 May 14 00:47:12.263297 systemd[1]: run-containerd-runc-k8s.io-a509fbeb97939dceb139f846930c75b43eaa87b756528e28b7d445690b350848-runc.X5KTEC.mount: Deactivated successfully. May 14 00:47:12.269447 systemd[1]: Started cri-containerd-6b107a982257f3b29ad7eb6594b57d23567e32f125394cb41ca0df33a6c5bccd.scope. May 14 00:47:12.270941 systemd[1]: Started cri-containerd-a509fbeb97939dceb139f846930c75b43eaa87b756528e28b7d445690b350848.scope. 
May 14 00:47:12.336536 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:47:12.343922 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:47:12.352792 env[1213]: time="2025-05-14T00:47:12.352749868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xczcj,Uid:740d84db-9806-4230-87e8-84bb933dad58,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b107a982257f3b29ad7eb6594b57d23567e32f125394cb41ca0df33a6c5bccd\"" May 14 00:47:12.353375 kubelet[1912]: E0514 00:47:12.353342 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:12.356020 env[1213]: time="2025-05-14T00:47:12.355982145Z" level=info msg="CreateContainer within sandbox \"6b107a982257f3b29ad7eb6594b57d23567e32f125394cb41ca0df33a6c5bccd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:47:12.362570 env[1213]: time="2025-05-14T00:47:12.362531496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v26l2,Uid:24f334f2-e8cc-45f3-9338-cd0839734590,Namespace:kube-system,Attempt:0,} returns sandbox id \"a509fbeb97939dceb139f846930c75b43eaa87b756528e28b7d445690b350848\"" May 14 00:47:12.363082 kubelet[1912]: E0514 00:47:12.363061 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:12.364347 env[1213]: time="2025-05-14T00:47:12.364320090Z" level=info msg="CreateContainer within sandbox \"a509fbeb97939dceb139f846930c75b43eaa87b756528e28b7d445690b350848\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:47:12.453538 env[1213]: time="2025-05-14T00:47:12.453485759Z" level=info msg="CreateContainer within sandbox \"6b107a982257f3b29ad7eb6594b57d23567e32f125394cb41ca0df33a6c5bccd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc914b8a03dcf10bc1dab8c5ce990c4c8b13ef412fa269204e7e46c11395cd61\"" May 14 00:47:12.454166 env[1213]: time="2025-05-14T00:47:12.454135662Z" level=info msg="StartContainer for \"dc914b8a03dcf10bc1dab8c5ce990c4c8b13ef412fa269204e7e46c11395cd61\"" May 14 00:47:12.456213 env[1213]: time="2025-05-14T00:47:12.456175970Z" level=info msg="CreateContainer within sandbox \"a509fbeb97939dceb139f846930c75b43eaa87b756528e28b7d445690b350848\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5ca8be67a60045e8fd925af441830333645aa12c7d9888aed1e5aa1ec5cae8e\"" May 14 00:47:12.456875 env[1213]: time="2025-05-14T00:47:12.456850393Z" level=info msg="StartContainer for \"c5ca8be67a60045e8fd925af441830333645aa12c7d9888aed1e5aa1ec5cae8e\"" May 14 00:47:12.471096 systemd[1]: Started cri-containerd-c5ca8be67a60045e8fd925af441830333645aa12c7d9888aed1e5aa1ec5cae8e.scope. May 14 00:47:12.471964 systemd[1]: Started cri-containerd-dc914b8a03dcf10bc1dab8c5ce990c4c8b13ef412fa269204e7e46c11395cd61.scope. 
May 14 00:47:12.527106 env[1213]: time="2025-05-14T00:47:12.526987310Z" level=info msg="StartContainer for \"dc914b8a03dcf10bc1dab8c5ce990c4c8b13ef412fa269204e7e46c11395cd61\" returns successfully" May 14 00:47:12.528442 env[1213]: time="2025-05-14T00:47:12.528400034Z" level=info msg="StartContainer for \"c5ca8be67a60045e8fd925af441830333645aa12c7d9888aed1e5aa1ec5cae8e\" returns successfully" May 14 00:47:13.185191 kubelet[1912]: E0514 00:47:13.185162 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:13.186687 kubelet[1912]: E0514 00:47:13.186656 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:13.197653 kubelet[1912]: I0514 00:47:13.197602 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v26l2" podStartSLOduration=21.197590752 podStartE2EDuration="21.197590752s" podCreationTimestamp="2025-05-14 00:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:47:13.196793252 +0000 UTC m=+27.179457410" watchObservedRunningTime="2025-05-14 00:47:13.197590752 +0000 UTC m=+27.180254870" May 14 00:47:13.207421 kubelet[1912]: I0514 00:47:13.207366 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xczcj" podStartSLOduration=21.207350871 podStartE2EDuration="21.207350871s" podCreationTimestamp="2025-05-14 00:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:47:13.206401614 +0000 UTC m=+27.189065732" watchObservedRunningTime="2025-05-14 00:47:13.207350871 +0000 UTC m=+27.190014989" May 14 00:47:14.188334 kubelet[1912]: E0514 00:47:14.188304 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:14.188732 kubelet[1912]: E0514 00:47:14.188682 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:15.190196 kubelet[1912]: E0514 00:47:15.190070 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:15.190196 kubelet[1912]: E0514 00:47:15.190129 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:15.753514 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:50602.service. May 14 00:47:15.796754 sshd[3325]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:15.796933 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:15.804539 systemd-logind[1202]: New session 7 of user core. May 14 00:47:15.805222 systemd[1]: Started session-7.scope. 
May 14 00:47:15.926203 sshd[3325]: pam_unix(sshd:session): session closed for user core May 14 00:47:15.928847 systemd-logind[1202]: Session 7 logged out. Waiting for processes to exit. May 14 00:47:15.929062 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:50602.service: Deactivated successfully. May 14 00:47:15.929764 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:47:15.930498 systemd-logind[1202]: Removed session 7. May 14 00:47:20.937137 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:50618.service. May 14 00:47:20.983717 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 50618 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:20.985037 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:20.990830 systemd-logind[1202]: New session 8 of user core. May 14 00:47:20.995823 systemd[1]: Started session-8.scope. May 14 00:47:21.112684 sshd[3339]: pam_unix(sshd:session): session closed for user core May 14 00:47:21.115348 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:50618.service: Deactivated successfully. May 14 00:47:21.116131 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:47:21.116634 systemd-logind[1202]: Session 8 logged out. Waiting for processes to exit. May 14 00:47:21.117942 systemd-logind[1202]: Removed session 8. May 14 00:47:26.117887 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:38570.service. May 14 00:47:26.155160 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:26.156664 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:26.160090 systemd-logind[1202]: New session 9 of user core. May 14 00:47:26.160911 systemd[1]: Started session-9.scope. May 14 00:47:26.286242 sshd[3356]: pam_unix(sshd:session): session closed for user core May 14 00:47:26.289461 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:38570.service: Deactivated successfully. May 14 00:47:26.290134 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:47:26.290704 systemd-logind[1202]: Session 9 logged out. Waiting for processes to exit. May 14 00:47:26.292001 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:38572.service. May 14 00:47:26.292736 systemd-logind[1202]: Removed session 9. May 14 00:47:26.328408 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:26.329669 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:26.335584 systemd[1]: Started session-10.scope. May 14 00:47:26.336054 systemd-logind[1202]: New session 10 of user core. May 14 00:47:26.496108 sshd[3370]: pam_unix(sshd:session): session closed for user core May 14 00:47:26.500193 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:38586.service. May 14 00:47:26.504171 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:47:26.506089 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:38572.service: Deactivated successfully. May 14 00:47:26.506932 systemd-logind[1202]: Session 10 logged out. Waiting for processes to exit. May 14 00:47:26.508629 systemd-logind[1202]: Removed session 10. 
May 14 00:47:26.546663 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 38586 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:26.547985 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:26.552172 systemd[1]: Started session-11.scope. May 14 00:47:26.552678 systemd-logind[1202]: New session 11 of user core. May 14 00:47:26.674453 sshd[3381]: pam_unix(sshd:session): session closed for user core May 14 00:47:26.676908 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:38586.service: Deactivated successfully. May 14 00:47:26.677618 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:47:26.678386 systemd-logind[1202]: Session 11 logged out. Waiting for processes to exit. May 14 00:47:26.679209 systemd-logind[1202]: Removed session 11. May 14 00:47:31.679266 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:38592.service. May 14 00:47:31.714732 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 38592 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:31.716134 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:31.719732 systemd-logind[1202]: New session 12 of user core. May 14 00:47:31.720166 systemd[1]: Started session-12.scope. May 14 00:47:31.834617 sshd[3396]: pam_unix(sshd:session): session closed for user core May 14 00:47:31.836988 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:38592.service: Deactivated successfully. May 14 00:47:31.837710 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:47:31.838379 systemd-logind[1202]: Session 12 logged out. Waiting for processes to exit. May 14 00:47:31.839131 systemd-logind[1202]: Removed session 12. May 14 00:47:36.839686 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:37712.service. May 14 00:47:36.874967 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 37712 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:36.876048 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:36.879652 systemd-logind[1202]: New session 13 of user core. May 14 00:47:36.880060 systemd[1]: Started session-13.scope. May 14 00:47:36.988853 sshd[3409]: pam_unix(sshd:session): session closed for user core May 14 00:47:36.993048 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:37716.service. May 14 00:47:36.993571 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:37712.service: Deactivated successfully. May 14 00:47:36.994325 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:47:36.995005 systemd-logind[1202]: Session 13 logged out. Waiting for processes to exit. May 14 00:47:36.995828 systemd-logind[1202]: Removed session 13. May 14 00:47:37.028266 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 37716 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:37.029614 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:37.033304 systemd-logind[1202]: New session 14 of user core. May 14 00:47:37.033928 systemd[1]: Started session-14.scope. May 14 00:47:37.226799 sshd[3421]: pam_unix(sshd:session): session closed for user core May 14 00:47:37.230162 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:37732.service. May 14 00:47:37.230671 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:37716.service: Deactivated successfully. May 14 00:47:37.231344 systemd[1]: session-14.scope: Deactivated successfully. 
May 14 00:47:37.231960 systemd-logind[1202]: Session 14 logged out. Waiting for processes to exit. May 14 00:47:37.232843 systemd-logind[1202]: Removed session 14. May 14 00:47:37.268219 sshd[3432]: Accepted publickey for core from 10.0.0.1 port 37732 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:37.269878 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:37.273535 systemd-logind[1202]: New session 15 of user core. May 14 00:47:37.273998 systemd[1]: Started session-15.scope. May 14 00:47:38.590345 sshd[3432]: pam_unix(sshd:session): session closed for user core May 14 00:47:38.593473 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:37746.service. May 14 00:47:38.596556 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:47:38.597104 systemd-logind[1202]: Session 15 logged out. Waiting for processes to exit. May 14 00:47:38.597262 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:37732.service: Deactivated successfully. May 14 00:47:38.598609 systemd-logind[1202]: Removed session 15. May 14 00:47:38.635878 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 37746 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:38.637465 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:38.641655 systemd-logind[1202]: New session 16 of user core. May 14 00:47:38.642097 systemd[1]: Started session-16.scope. May 14 00:47:38.874222 sshd[3453]: pam_unix(sshd:session): session closed for user core May 14 00:47:38.877670 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:37754.service. May 14 00:47:38.880607 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:37746.service: Deactivated successfully. May 14 00:47:38.881395 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:47:38.882178 systemd-logind[1202]: Session 16 logged out. Waiting for processes to exit. May 14 00:47:38.884788 systemd-logind[1202]: Removed session 16. May 14 00:47:38.916644 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 37754 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:38.918406 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:38.922581 systemd-logind[1202]: New session 17 of user core. May 14 00:47:38.923075 systemd[1]: Started session-17.scope. May 14 00:47:39.038290 sshd[3466]: pam_unix(sshd:session): session closed for user core May 14 00:47:39.041245 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:37754.service: Deactivated successfully. May 14 00:47:39.041981 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:47:39.042580 systemd-logind[1202]: Session 17 logged out. Waiting for processes to exit. May 14 00:47:39.043243 systemd-logind[1202]: Removed session 17. May 14 00:47:44.043099 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:52670.service. May 14 00:47:44.080296 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 52670 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:44.081728 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:44.085045 systemd-logind[1202]: New session 18 of user core. May 14 00:47:44.085930 systemd[1]: Started session-18.scope. May 14 00:47:44.202512 sshd[3481]: pam_unix(sshd:session): session closed for user core May 14 00:47:44.204914 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:52670.service: Deactivated successfully. 
May 14 00:47:44.205636 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:47:44.206174 systemd-logind[1202]: Session 18 logged out. Waiting for processes to exit. May 14 00:47:44.207104 systemd-logind[1202]: Removed session 18. May 14 00:47:49.206562 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:52674.service. May 14 00:47:49.242826 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 52674 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:49.244025 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:49.247937 systemd-logind[1202]: New session 19 of user core. May 14 00:47:49.248358 systemd[1]: Started session-19.scope. May 14 00:47:49.356055 sshd[3500]: pam_unix(sshd:session): session closed for user core May 14 00:47:49.358592 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:52674.service: Deactivated successfully. May 14 00:47:49.359283 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:47:49.359787 systemd-logind[1202]: Session 19 logged out. Waiting for processes to exit. May 14 00:47:49.360486 systemd-logind[1202]: Removed session 19. May 14 00:47:54.360430 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:38932.service. May 14 00:47:54.396757 sshd[3516]: Accepted publickey for core from 10.0.0.1 port 38932 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:54.398459 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:54.401916 systemd-logind[1202]: New session 20 of user core. May 14 00:47:54.402800 systemd[1]: Started session-20.scope. May 14 00:47:54.510842 sshd[3516]: pam_unix(sshd:session): session closed for user core May 14 00:47:54.514143 systemd-logind[1202]: Session 20 logged out. Waiting for processes to exit. May 14 00:47:54.515383 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:38936.service. May 14 00:47:54.515943 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:38932.service: Deactivated successfully. May 14 00:47:54.516622 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:47:54.517328 systemd-logind[1202]: Removed session 20. May 14 00:47:54.551738 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 38936 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:54.552906 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:54.556345 systemd-logind[1202]: New session 21 of user core. May 14 00:47:54.557182 systemd[1]: Started session-21.scope. May 14 00:47:56.457683 env[1213]: time="2025-05-14T00:47:56.457548137Z" level=info msg="StopContainer for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" with timeout 30 (s)" May 14 00:47:56.458343 env[1213]: time="2025-05-14T00:47:56.458317050Z" level=info msg="Stop container \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" with signal terminated" May 14 00:47:56.469239 systemd[1]: cri-containerd-ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c.scope: Deactivated successfully. May 14 00:47:56.486640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c-rootfs.mount: Deactivated successfully. 
May 14 00:47:56.493393 env[1213]: time="2025-05-14T00:47:56.493348526Z" level=info msg="shim disconnected" id=ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c May 14 00:47:56.493393 env[1213]: time="2025-05-14T00:47:56.493390966Z" level=warning msg="cleaning up after shim disconnected" id=ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c namespace=k8s.io May 14 00:47:56.493605 env[1213]: time="2025-05-14T00:47:56.493400485Z" level=info msg="cleaning up dead shim" May 14 00:47:56.498299 env[1213]: time="2025-05-14T00:47:56.498217441Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:47:56.502212 env[1213]: time="2025-05-14T00:47:56.502176964Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3573 runtime=io.containerd.runc.v2\n" May 14 00:47:56.504205 env[1213]: time="2025-05-14T00:47:56.504165626Z" level=info msg="StopContainer for \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" with timeout 2 (s)" May 14 00:47:56.504478 env[1213]: time="2025-05-14T00:47:56.504437303Z" level=info msg="StopContainer for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" returns successfully" May 14 00:47:56.504797 env[1213]: time="2025-05-14T00:47:56.504767540Z" level=info msg="Stop container \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" with signal terminated" May 14 00:47:56.504873 env[1213]: time="2025-05-14T00:47:56.504845980Z" level=info msg="StopPodSandbox for \"a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9\"" May 14 00:47:56.504914 env[1213]: time="2025-05-14T00:47:56.504902699Z" level=info msg="Container to stop \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:47:56.508057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9-shm.mount: Deactivated successfully. May 14 00:47:56.510972 systemd-networkd[1037]: lxc_health: Link DOWN May 14 00:47:56.510979 systemd-networkd[1037]: lxc_health: Lost carrier May 14 00:47:56.514050 systemd[1]: cri-containerd-a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9.scope: Deactivated successfully. May 14 00:47:56.531277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9-rootfs.mount: Deactivated successfully. May 14 00:47:56.538842 systemd[1]: cri-containerd-b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf.scope: Deactivated successfully. May 14 00:47:56.539148 systemd[1]: cri-containerd-b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf.scope: Consumed 6.398s CPU time. 
May 14 00:47:56.539822 env[1213]: time="2025-05-14T00:47:56.539773376Z" level=info msg="shim disconnected" id=a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9 May 14 00:47:56.539918 env[1213]: time="2025-05-14T00:47:56.539823936Z" level=warning msg="cleaning up after shim disconnected" id=a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9 namespace=k8s.io May 14 00:47:56.539918 env[1213]: time="2025-05-14T00:47:56.539835216Z" level=info msg="cleaning up dead shim" May 14 00:47:56.548095 env[1213]: time="2025-05-14T00:47:56.548051580Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3616 runtime=io.containerd.runc.v2\n" May 14 00:47:56.548402 env[1213]: time="2025-05-14T00:47:56.548375937Z" level=info msg="TearDown network for sandbox \"a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9\" successfully" May 14 00:47:56.548402 env[1213]: time="2025-05-14T00:47:56.548400776Z" level=info msg="StopPodSandbox for \"a1dd0f0e6966e301bfd96922c47fb0e46b8c5da3d57be1f8171955b9a61334c9\" returns successfully" May 14 00:47:56.560868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf-rootfs.mount: Deactivated successfully. May 14 00:47:56.563608 env[1213]: time="2025-05-14T00:47:56.563565596Z" level=info msg="shim disconnected" id=b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf May 14 00:47:56.563725 env[1213]: time="2025-05-14T00:47:56.563612076Z" level=warning msg="cleaning up after shim disconnected" id=b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf namespace=k8s.io May 14 00:47:56.563725 env[1213]: time="2025-05-14T00:47:56.563623995Z" level=info msg="cleaning up dead shim" May 14 00:47:56.570539 env[1213]: time="2025-05-14T00:47:56.570499852Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3642 runtime=io.containerd.runc.v2\n" May 14 00:47:56.572542 env[1213]: time="2025-05-14T00:47:56.572505713Z" level=info msg="StopContainer for \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" returns successfully" May 14 00:47:56.573043 env[1213]: time="2025-05-14T00:47:56.573016309Z" level=info msg="StopPodSandbox for \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\"" May 14 00:47:56.573092 env[1213]: time="2025-05-14T00:47:56.573078548Z" level=info msg="Container to stop \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:47:56.573120 env[1213]: time="2025-05-14T00:47:56.573094228Z" level=info msg="Container to stop \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:47:56.573120 env[1213]: time="2025-05-14T00:47:56.573106028Z" level=info msg="Container to stop \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:47:56.573172 env[1213]: time="2025-05-14T00:47:56.573117988Z" level=info msg="Container to stop \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:47:56.573172 env[1213]: time="2025-05-14T00:47:56.573128947Z" level=info msg="Container to stop 
\"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:47:56.578347 systemd[1]: cri-containerd-83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45.scope: Deactivated successfully. May 14 00:47:56.633100 env[1213]: time="2025-05-14T00:47:56.633052833Z" level=info msg="shim disconnected" id=83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45 May 14 00:47:56.633803 env[1213]: time="2025-05-14T00:47:56.633777066Z" level=warning msg="cleaning up after shim disconnected" id=83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45 namespace=k8s.io May 14 00:47:56.633907 env[1213]: time="2025-05-14T00:47:56.633890985Z" level=info msg="cleaning up dead shim" May 14 00:47:56.641006 env[1213]: time="2025-05-14T00:47:56.640974120Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:47:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3674 runtime=io.containerd.runc.v2\n" May 14 00:47:56.641427 env[1213]: time="2025-05-14T00:47:56.641395716Z" level=info msg="TearDown network for sandbox \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" successfully" May 14 00:47:56.641540 env[1213]: time="2025-05-14T00:47:56.641515115Z" level=info msg="StopPodSandbox for \"83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45\" returns successfully" May 14 00:47:56.746845 kubelet[1912]: I0514 00:47:56.745863 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-cilium-config-path\") pod \"a66d07d7-1b8d-4c84-9dd4-07cb969c76df\" (UID: \"a66d07d7-1b8d-4c84-9dd4-07cb969c76df\") " May 14 00:47:56.746845 kubelet[1912]: I0514 00:47:56.745912 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-run\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.746845 kubelet[1912]: I0514 00:47:56.745959 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cni-path\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.746845 kubelet[1912]: I0514 00:47:56.745975 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-hostproc\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.746845 kubelet[1912]: I0514 00:47:56.745992 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-net\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.746845 kubelet[1912]: I0514 00:47:56.746044 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5t8t\" (UniqueName: \"kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-kube-api-access-z5t8t\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747293 
kubelet[1912]: I0514 00:47:56.746062 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-lib-modules\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747293 kubelet[1912]: I0514 00:47:56.746075 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-xtables-lock\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747293 kubelet[1912]: I0514 00:47:56.746089 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-etc-cni-netd\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747293 kubelet[1912]: I0514 00:47:56.746110 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51df70e7-881a-49a3-9802-4799eae1e484-clustermesh-secrets\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747293 kubelet[1912]: I0514 00:47:56.746127 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sks6p\" (UniqueName: \"kubernetes.io/projected/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-kube-api-access-sks6p\") pod \"a66d07d7-1b8d-4c84-9dd4-07cb969c76df\" (UID: \"a66d07d7-1b8d-4c84-9dd4-07cb969c76df\") " May 14 00:47:56.747293 kubelet[1912]: I0514 00:47:56.746144 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51df70e7-881a-49a3-9802-4799eae1e484-cilium-config-path\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747437 kubelet[1912]: I0514 00:47:56.746159 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-kernel\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747437 kubelet[1912]: I0514 00:47:56.746176 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-cgroup\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747437 kubelet[1912]: I0514 00:47:56.746191 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-hubble-tls\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747437 kubelet[1912]: I0514 00:47:56.746205 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-bpf-maps\") pod \"51df70e7-881a-49a3-9802-4799eae1e484\" (UID: \"51df70e7-881a-49a3-9802-4799eae1e484\") " May 14 00:47:56.747437 kubelet[1912]: I0514 00:47:56.746985 
1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.747437 kubelet[1912]: I0514 00:47:56.747066 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.747578 kubelet[1912]: I0514 00:47:56.747087 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cni-path" (OuterVolumeSpecName: "cni-path") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.747578 kubelet[1912]: I0514 00:47:56.747105 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-hostproc" (OuterVolumeSpecName: "hostproc") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.747578 kubelet[1912]: I0514 00:47:56.747120 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.748274 kubelet[1912]: I0514 00:47:56.747830 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.748274 kubelet[1912]: I0514 00:47:56.747868 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.748274 kubelet[1912]: I0514 00:47:56.747886 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.749787 kubelet[1912]: I0514 00:47:56.749753 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a66d07d7-1b8d-4c84-9dd4-07cb969c76df" (UID: "a66d07d7-1b8d-4c84-9dd4-07cb969c76df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:47:56.749858 kubelet[1912]: I0514 00:47:56.749814 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.750202 kubelet[1912]: I0514 00:47:56.750170 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51df70e7-881a-49a3-9802-4799eae1e484-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:47:56.750275 kubelet[1912]: I0514 00:47:56.750216 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:47:56.753934 kubelet[1912]: I0514 00:47:56.753905 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-kube-api-access-sks6p" (OuterVolumeSpecName: "kube-api-access-sks6p") pod "a66d07d7-1b8d-4c84-9dd4-07cb969c76df" (UID: "a66d07d7-1b8d-4c84-9dd4-07cb969c76df"). InnerVolumeSpecName "kube-api-access-sks6p". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:47:56.754012 kubelet[1912]: I0514 00:47:56.753937 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:47:56.754012 kubelet[1912]: I0514 00:47:56.753976 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-kube-api-access-z5t8t" (OuterVolumeSpecName: "kube-api-access-z5t8t") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). InnerVolumeSpecName "kube-api-access-z5t8t". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:47:56.754068 kubelet[1912]: I0514 00:47:56.754047 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51df70e7-881a-49a3-9802-4799eae1e484-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "51df70e7-881a-49a3-9802-4799eae1e484" (UID: "51df70e7-881a-49a3-9802-4799eae1e484"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:47:56.847044 kubelet[1912]: I0514 00:47:56.847001 1912 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sks6p\" (UniqueName: \"kubernetes.io/projected/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-kube-api-access-sks6p\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847044 kubelet[1912]: I0514 00:47:56.847033 1912 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847044 kubelet[1912]: I0514 00:47:56.847045 1912 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51df70e7-881a-49a3-9802-4799eae1e484-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847044 kubelet[1912]: I0514 00:47:56.847054 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51df70e7-881a-49a3-9802-4799eae1e484-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847062 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847070 1912 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847077 1912 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847085 1912 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847093 1912 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847100 1912 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847107 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66d07d7-1b8d-4c84-9dd4-07cb969c76df-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847287 kubelet[1912]: I0514 00:47:56.847123 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847540 kubelet[1912]: I0514 00:47:56.847136 1912 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z5t8t\" (UniqueName: 
\"kubernetes.io/projected/51df70e7-881a-49a3-9802-4799eae1e484-kube-api-access-z5t8t\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847540 kubelet[1912]: I0514 00:47:56.847144 1912 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847540 kubelet[1912]: I0514 00:47:56.847151 1912 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:47:56.847540 kubelet[1912]: I0514 00:47:56.847159 1912 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51df70e7-881a-49a3-9802-4799eae1e484-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:47:57.264851 kubelet[1912]: I0514 00:47:57.264771 1912 scope.go:117] "RemoveContainer" containerID="ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c" May 14 00:47:57.266875 env[1213]: time="2025-05-14T00:47:57.266841604Z" level=info msg="RemoveContainer for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\"" May 14 00:47:57.271787 systemd[1]: Removed slice kubepods-besteffort-poda66d07d7_1b8d_4c84_9dd4_07cb969c76df.slice. May 14 00:47:57.272754 systemd[1]: Removed slice kubepods-burstable-pod51df70e7_881a_49a3_9802_4799eae1e484.slice. May 14 00:47:57.272830 systemd[1]: kubepods-burstable-pod51df70e7_881a_49a3_9802_4799eae1e484.slice: Consumed 6.641s CPU time. May 14 00:47:57.307310 env[1213]: time="2025-05-14T00:47:57.307027472Z" level=info msg="RemoveContainer for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" returns successfully" May 14 00:47:57.307540 kubelet[1912]: I0514 00:47:57.307507 1912 scope.go:117] "RemoveContainer" containerID="ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c" May 14 00:47:57.307824 env[1213]: time="2025-05-14T00:47:57.307756145Z" level=error msg="ContainerStatus for \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\": not found" May 14 00:47:57.308011 kubelet[1912]: E0514 00:47:57.307990 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\": not found" containerID="ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c" May 14 00:47:57.308582 kubelet[1912]: I0514 00:47:57.308023 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c"} err="failed to get container status \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ede5ed6dfd4a9ed423a30138c83015e8c64f93b3839489c21e2a31738c4ba81c\": not found" May 14 00:47:57.308582 kubelet[1912]: I0514 00:47:57.308572 1912 scope.go:117] "RemoveContainer" containerID="b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf" May 14 00:47:57.309919 env[1213]: time="2025-05-14T00:47:57.309884686Z" level=info msg="RemoveContainer for 
\"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\"" May 14 00:47:57.312571 env[1213]: time="2025-05-14T00:47:57.312539341Z" level=info msg="RemoveContainer for \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" returns successfully" May 14 00:47:57.312819 kubelet[1912]: I0514 00:47:57.312800 1912 scope.go:117] "RemoveContainer" containerID="01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8" May 14 00:47:57.314252 env[1213]: time="2025-05-14T00:47:57.314209406Z" level=info msg="RemoveContainer for \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\"" May 14 00:47:57.316923 env[1213]: time="2025-05-14T00:47:57.316750062Z" level=info msg="RemoveContainer for \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\" returns successfully" May 14 00:47:57.317163 kubelet[1912]: I0514 00:47:57.317013 1912 scope.go:117] "RemoveContainer" containerID="212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c" May 14 00:47:57.318074 env[1213]: time="2025-05-14T00:47:57.318048490Z" level=info msg="RemoveContainer for \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\"" May 14 00:47:57.321015 env[1213]: time="2025-05-14T00:47:57.320977583Z" level=info msg="RemoveContainer for \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\" returns successfully" May 14 00:47:57.321177 kubelet[1912]: I0514 00:47:57.321153 1912 scope.go:117] "RemoveContainer" containerID="2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5" May 14 00:47:57.322453 env[1213]: time="2025-05-14T00:47:57.322425250Z" level=info msg="RemoveContainer for \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\"" May 14 00:47:57.325171 env[1213]: time="2025-05-14T00:47:57.325136464Z" level=info msg="RemoveContainer for \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\" returns successfully" May 14 00:47:57.325350 kubelet[1912]: I0514 00:47:57.325330 1912 scope.go:117] "RemoveContainer" containerID="2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161" May 14 00:47:57.326594 env[1213]: time="2025-05-14T00:47:57.326555011Z" level=info msg="RemoveContainer for \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\"" May 14 00:47:57.328938 env[1213]: time="2025-05-14T00:47:57.328903310Z" level=info msg="RemoveContainer for \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\" returns successfully" May 14 00:47:57.329119 kubelet[1912]: I0514 00:47:57.329098 1912 scope.go:117] "RemoveContainer" containerID="b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf" May 14 00:47:57.331049 env[1213]: time="2025-05-14T00:47:57.330891051Z" level=error msg="ContainerStatus for \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\": not found" May 14 00:47:57.332342 kubelet[1912]: E0514 00:47:57.332296 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\": not found" containerID="b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf" May 14 00:47:57.332421 kubelet[1912]: I0514 00:47:57.332339 1912 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf"} err="failed to get container status \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"b22e2d5d25fce34a0936263ab41d2a4a2cfe30549a864c38be6e9d17b022a8bf\": not found" May 14 00:47:57.332421 kubelet[1912]: I0514 00:47:57.332366 1912 scope.go:117] "RemoveContainer" containerID="01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8" May 14 00:47:57.332847 env[1213]: time="2025-05-14T00:47:57.332775674Z" level=error msg="ContainerStatus for \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\": not found" May 14 00:47:57.333308 kubelet[1912]: E0514 00:47:57.333287 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\": not found" containerID="01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8" May 14 00:47:57.333348 kubelet[1912]: I0514 00:47:57.333317 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8"} err="failed to get container status \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"01396fd949761d0bdccedc5970c44f5cd5e37aba09232c4a20d5f56eba2821e8\": not found" May 14 00:47:57.333348 kubelet[1912]: I0514 00:47:57.333335 1912 scope.go:117] "RemoveContainer" containerID="212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c" May 14 00:47:57.333696 env[1213]: time="2025-05-14T00:47:57.333631786Z" level=error msg="ContainerStatus for \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\": not found" May 14 00:47:57.334532 kubelet[1912]: E0514 00:47:57.334462 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\": not found" containerID="212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c" May 14 00:47:57.334596 kubelet[1912]: I0514 00:47:57.334530 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c"} err="failed to get container status \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"212a7205ec4143432eb3a59647e7bebeec0b27e33635d6ba42e27a42e1643e9c\": not found" May 14 00:47:57.334596 kubelet[1912]: I0514 00:47:57.334547 1912 scope.go:117] "RemoveContainer" containerID="2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5" May 14 00:47:57.335351 env[1213]: time="2025-05-14T00:47:57.335245771Z" level=error msg="ContainerStatus for \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\": not found" May 14 00:47:57.335650 kubelet[1912]: E0514 00:47:57.335624 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\": not found" containerID="2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5" May 14 00:47:57.335696 kubelet[1912]: I0514 00:47:57.335656 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5"} err="failed to get container status \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d13cd05d223c90566c88567a4d83bc67fa65642478bf1f879124979fe132dc5\": not found" May 14 00:47:57.335696 kubelet[1912]: I0514 00:47:57.335676 1912 scope.go:117] "RemoveContainer" containerID="2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161" May 14 00:47:57.335927 env[1213]: time="2025-05-14T00:47:57.335865445Z" level=error msg="ContainerStatus for \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\": not found" May 14 00:47:57.336045 kubelet[1912]: E0514 00:47:57.336024 1912 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\": not found" containerID="2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161" May 14 00:47:57.336083 kubelet[1912]: I0514 00:47:57.336052 1912 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161"} err="failed to get container status \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\": rpc error: code = NotFound desc = an error occurred when try to find container \"2df6d0c6333c358f06de15f0aaf995a34dc9285ccb0ea577dd43bd63b8235161\": not found" May 14 00:47:57.465681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45-rootfs.mount: Deactivated successfully. May 14 00:47:57.465787 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83cd0bff4744d0f63e273ed452ff1c5609548ad80cf18ce4549aea0ae40fbb45-shm.mount: Deactivated successfully. May 14 00:47:57.465854 systemd[1]: var-lib-kubelet-pods-51df70e7\x2d881a\x2d49a3\x2d9802\x2d4799eae1e484-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:47:57.465904 systemd[1]: var-lib-kubelet-pods-a66d07d7\x2d1b8d\x2d4c84\x2d9dd4\x2d07cb969c76df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsks6p.mount: Deactivated successfully. May 14 00:47:57.465963 systemd[1]: var-lib-kubelet-pods-51df70e7\x2d881a\x2d49a3\x2d9802\x2d4799eae1e484-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5t8t.mount: Deactivated successfully. 
May 14 00:47:57.466019 systemd[1]: var-lib-kubelet-pods-51df70e7\x2d881a\x2d49a3\x2d9802\x2d4799eae1e484-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:47:58.101657 kubelet[1912]: E0514 00:47:58.101621 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:47:58.104261 kubelet[1912]: I0514 00:47:58.104218 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51df70e7-881a-49a3-9802-4799eae1e484" path="/var/lib/kubelet/pods/51df70e7-881a-49a3-9802-4799eae1e484/volumes" May 14 00:47:58.104839 kubelet[1912]: I0514 00:47:58.104818 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66d07d7-1b8d-4c84-9dd4-07cb969c76df" path="/var/lib/kubelet/pods/a66d07d7-1b8d-4c84-9dd4-07cb969c76df/volumes" May 14 00:47:58.424386 sshd[3528]: pam_unix(sshd:session): session closed for user core May 14 00:47:58.428380 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:38950.service. May 14 00:47:58.430203 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:47:58.430410 systemd[1]: session-21.scope: Consumed 1.241s CPU time. May 14 00:47:58.430822 systemd-logind[1202]: Session 21 logged out. Waiting for processes to exit. May 14 00:47:58.430937 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:38936.service: Deactivated successfully. May 14 00:47:58.431828 systemd-logind[1202]: Removed session 21. May 14 00:47:58.468331 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:47:58.469645 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:47:58.473130 systemd-logind[1202]: New session 22 of user core. May 14 00:47:58.473981 systemd[1]: Started session-22.scope. May 14 00:48:00.205912 sshd[3692]: pam_unix(sshd:session): session closed for user core May 14 00:48:00.209115 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:38956.service. May 14 00:48:00.210760 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:38950.service: Deactivated successfully. May 14 00:48:00.211446 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:48:00.211621 systemd[1]: session-22.scope: Consumed 1.624s CPU time. May 14 00:48:00.212912 systemd-logind[1202]: Session 22 logged out. Waiting for processes to exit. May 14 00:48:00.216439 systemd-logind[1202]: Removed session 22. 
May 14 00:48:00.236447 kubelet[1912]: E0514 00:48:00.236392 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a66d07d7-1b8d-4c84-9dd4-07cb969c76df" containerName="cilium-operator" May 14 00:48:00.236447 kubelet[1912]: E0514 00:48:00.236427 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51df70e7-881a-49a3-9802-4799eae1e484" containerName="apply-sysctl-overwrites" May 14 00:48:00.236447 kubelet[1912]: E0514 00:48:00.236434 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51df70e7-881a-49a3-9802-4799eae1e484" containerName="mount-bpf-fs" May 14 00:48:00.236447 kubelet[1912]: E0514 00:48:00.236441 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51df70e7-881a-49a3-9802-4799eae1e484" containerName="clean-cilium-state" May 14 00:48:00.236447 kubelet[1912]: E0514 00:48:00.236448 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51df70e7-881a-49a3-9802-4799eae1e484" containerName="mount-cgroup" May 14 00:48:00.236447 kubelet[1912]: E0514 00:48:00.236454 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51df70e7-881a-49a3-9802-4799eae1e484" containerName="cilium-agent" May 14 00:48:00.236931 kubelet[1912]: I0514 00:48:00.236506 1912 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66d07d7-1b8d-4c84-9dd4-07cb969c76df" containerName="cilium-operator" May 14 00:48:00.236931 kubelet[1912]: I0514 00:48:00.236514 1912 memory_manager.go:354] "RemoveStaleState removing state" podUID="51df70e7-881a-49a3-9802-4799eae1e484" containerName="cilium-agent" May 14 00:48:00.242656 systemd[1]: Created slice kubepods-burstable-podbaad6806_d9f6_4126_84be_c5e925a17833.slice. May 14 00:48:00.256171 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 38956 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:48:00.257431 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:48:00.264930 systemd[1]: Started session-23.scope. 
May 14 00:48:00.266401 kubelet[1912]: I0514 00:48:00.265532 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-kernel\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266401 kubelet[1912]: I0514 00:48:00.265564 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-etc-cni-netd\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266401 kubelet[1912]: I0514 00:48:00.265593 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baad6806-d9f6-4126-84be-c5e925a17833-cilium-config-path\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266401 kubelet[1912]: I0514 00:48:00.265611 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-hubble-tls\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266401 kubelet[1912]: I0514 00:48:00.265628 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-cgroup\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266401 kubelet[1912]: I0514 00:48:00.265643 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-hostproc\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.265376 systemd-logind[1202]: New session 23 of user core. 
May 14 00:48:00.266628 kubelet[1912]: I0514 00:48:00.265665 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cni-path\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266628 kubelet[1912]: I0514 00:48:00.265680 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-lib-modules\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266628 kubelet[1912]: I0514 00:48:00.265695 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-clustermesh-secrets\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266628 kubelet[1912]: I0514 00:48:00.265709 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mcz\" (UniqueName: \"kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-kube-api-access-w5mcz\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266628 kubelet[1912]: I0514 00:48:00.265725 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-bpf-maps\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266628 kubelet[1912]: I0514 00:48:00.265747 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-cilium-ipsec-secrets\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266801 kubelet[1912]: I0514 00:48:00.265763 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-net\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266801 kubelet[1912]: I0514 00:48:00.265778 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-run\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.266801 kubelet[1912]: I0514 00:48:00.265792 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-xtables-lock\") pod \"cilium-phn8m\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " pod="kube-system/cilium-phn8m" May 14 00:48:00.393122 sshd[3704]: pam_unix(sshd:session): session closed for user core May 14 00:48:00.395588 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:38956.service: Deactivated 
successfully. May 14 00:48:00.396168 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:48:00.396983 systemd-logind[1202]: Session 23 logged out. Waiting for processes to exit. May 14 00:48:00.398729 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:38966.service. May 14 00:48:00.401619 systemd-logind[1202]: Removed session 23. May 14 00:48:00.407492 kubelet[1912]: E0514 00:48:00.407449 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:00.408564 env[1213]: time="2025-05-14T00:48:00.408133796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-phn8m,Uid:baad6806-d9f6-4126-84be-c5e925a17833,Namespace:kube-system,Attempt:0,}" May 14 00:48:00.422834 env[1213]: time="2025-05-14T00:48:00.422771020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:48:00.422928 env[1213]: time="2025-05-14T00:48:00.422838460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:48:00.422928 env[1213]: time="2025-05-14T00:48:00.422866460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:48:00.423123 env[1213]: time="2025-05-14T00:48:00.423090817Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d pid=3731 runtime=io.containerd.runc.v2 May 14 00:48:00.446293 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 38966 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:48:00.446072 systemd[1]: Started cri-containerd-9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d.scope. May 14 00:48:00.448162 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:48:00.458766 systemd-logind[1202]: New session 24 of user core. May 14 00:48:00.459655 systemd[1]: Started session-24.scope. 
May 14 00:48:00.488815 env[1213]: time="2025-05-14T00:48:00.488766528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-phn8m,Uid:baad6806-d9f6-4126-84be-c5e925a17833,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d\"" May 14 00:48:00.490016 kubelet[1912]: E0514 00:48:00.489548 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:00.491563 env[1213]: time="2025-05-14T00:48:00.491533182Z" level=info msg="CreateContainer within sandbox \"9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:48:00.501660 env[1213]: time="2025-05-14T00:48:00.501612169Z" level=info msg="CreateContainer within sandbox \"9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\"" May 14 00:48:00.502283 env[1213]: time="2025-05-14T00:48:00.502256483Z" level=info msg="StartContainer for \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\"" May 14 00:48:00.521061 systemd[1]: Started cri-containerd-9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d.scope. May 14 00:48:00.537962 systemd[1]: cri-containerd-9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d.scope: Deactivated successfully. May 14 00:48:00.562168 env[1213]: time="2025-05-14T00:48:00.562112927Z" level=info msg="shim disconnected" id=9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d May 14 00:48:00.562168 env[1213]: time="2025-05-14T00:48:00.562170806Z" level=warning msg="cleaning up after shim disconnected" id=9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d namespace=k8s.io May 14 00:48:00.562458 env[1213]: time="2025-05-14T00:48:00.562179686Z" level=info msg="cleaning up dead shim" May 14 00:48:00.571028 env[1213]: time="2025-05-14T00:48:00.570977485Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:48:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\ntime=\"2025-05-14T00:48:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 14 00:48:00.571366 env[1213]: time="2025-05-14T00:48:00.571263962Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" May 14 00:48:00.571679 env[1213]: time="2025-05-14T00:48:00.571640398Z" level=error msg="Failed to pipe stderr of container \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\"" error="reading from a closed fifo" May 14 00:48:00.575853 env[1213]: time="2025-05-14T00:48:00.575801720Z" level=error msg="Failed to pipe stdout of container \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\"" error="reading from a closed fifo" May 14 00:48:00.581976 env[1213]: time="2025-05-14T00:48:00.581913223Z" level=error msg="StartContainer for \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: 
error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 14 00:48:00.582353 kubelet[1912]: E0514 00:48:00.582307 1912 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d" May 14 00:48:00.583939 kubelet[1912]: E0514 00:48:00.583895 1912 kuberuntime_manager.go:1272] "Unhandled Error" err=< May 14 00:48:00.583939 kubelet[1912]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 14 00:48:00.583939 kubelet[1912]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 14 00:48:00.583939 kubelet[1912]: rm /hostbin/cilium-mount May 14 00:48:00.584286 kubelet[1912]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5mcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-phn8m_kube-system(baad6806-d9f6-4126-84be-c5e925a17833): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 14 00:48:00.584286 kubelet[1912]: > logger="UnhandledError" May 14 00:48:00.585094 kubelet[1912]: E0514 00:48:00.585057 1912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-phn8m" 
podUID="baad6806-d9f6-4126-84be-c5e925a17833" May 14 00:48:01.147492 kubelet[1912]: E0514 00:48:01.147428 1912 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:48:01.277540 env[1213]: time="2025-05-14T00:48:01.277490604Z" level=info msg="StopPodSandbox for \"9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d\"" May 14 00:48:01.277680 env[1213]: time="2025-05-14T00:48:01.277549803Z" level=info msg="Container to stop \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:48:01.283150 systemd[1]: cri-containerd-9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d.scope: Deactivated successfully. May 14 00:48:01.302758 env[1213]: time="2025-05-14T00:48:01.302699850Z" level=info msg="shim disconnected" id=9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d May 14 00:48:01.302758 env[1213]: time="2025-05-14T00:48:01.302750209Z" level=warning msg="cleaning up after shim disconnected" id=9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d namespace=k8s.io May 14 00:48:01.302758 env[1213]: time="2025-05-14T00:48:01.302760369Z" level=info msg="cleaning up dead shim" May 14 00:48:01.309855 env[1213]: time="2025-05-14T00:48:01.309819383Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:48:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3827 runtime=io.containerd.runc.v2\n" May 14 00:48:01.310136 env[1213]: time="2025-05-14T00:48:01.310106141Z" level=info msg="TearDown network for sandbox \"9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d\" successfully" May 14 00:48:01.310136 env[1213]: time="2025-05-14T00:48:01.310128781Z" level=info msg="StopPodSandbox for \"9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d\" returns successfully" May 14 00:48:01.370639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c182e7657f1bf243d252b343249e3f990713b785c724a4d2ed27c0b0df7de9d-shm.mount: Deactivated successfully. 
May 14 00:48:01.373804 kubelet[1912]: I0514 00:48:01.373768 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-bpf-maps\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373812 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-run\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373828 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-hostproc\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373851 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-cilium-ipsec-secrets\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373872 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baad6806-d9f6-4126-84be-c5e925a17833-cilium-config-path\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373889 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-clustermesh-secrets\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373906 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mcz\" (UniqueName: \"kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-kube-api-access-w5mcz\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373921 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-kernel\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373936 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-etc-cni-netd\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373951 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-xtables-lock\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 
kubelet[1912]: I0514 00:48:01.373964 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cni-path\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373978 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-lib-modules\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.373991 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-net\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.374007 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-hubble-tls\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.374021 1912 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-cgroup\") pod \"baad6806-d9f6-4126-84be-c5e925a17833\" (UID: \"baad6806-d9f6-4126-84be-c5e925a17833\") " May 14 00:48:01.374085 kubelet[1912]: I0514 00:48:01.374082 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.374438 kubelet[1912]: I0514 00:48:01.374108 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.374438 kubelet[1912]: I0514 00:48:01.374122 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.374438 kubelet[1912]: I0514 00:48:01.374136 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-hostproc" (OuterVolumeSpecName: "hostproc") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.374520 kubelet[1912]: I0514 00:48:01.374473 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.376504 kubelet[1912]: I0514 00:48:01.376460 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baad6806-d9f6-4126-84be-c5e925a17833-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:48:01.376798 kubelet[1912]: I0514 00:48:01.376761 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.376849 kubelet[1912]: I0514 00:48:01.376808 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cni-path" (OuterVolumeSpecName: "cni-path") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.376945 kubelet[1912]: I0514 00:48:01.376896 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.377046 kubelet[1912]: I0514 00:48:01.377032 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.377123 kubelet[1912]: I0514 00:48:01.377108 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:48:01.377788 kubelet[1912]: I0514 00:48:01.377752 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:48:01.378048 systemd[1]: var-lib-kubelet-pods-baad6806\x2dd9f6\x2d4126\x2d84be\x2dc5e925a17833-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 14 00:48:01.379175 kubelet[1912]: I0514 00:48:01.379138 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-kube-api-access-w5mcz" (OuterVolumeSpecName: "kube-api-access-w5mcz") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "kube-api-access-w5mcz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:48:01.380015 systemd[1]: var-lib-kubelet-pods-baad6806\x2dd9f6\x2d4126\x2d84be\x2dc5e925a17833-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5mcz.mount: Deactivated successfully. May 14 00:48:01.380513 kubelet[1912]: I0514 00:48:01.380486 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:48:01.380589 kubelet[1912]: I0514 00:48:01.380567 1912 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "baad6806-d9f6-4126-84be-c5e925a17833" (UID: "baad6806-d9f6-4126-84be-c5e925a17833"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:48:01.381744 systemd[1]: var-lib-kubelet-pods-baad6806\x2dd9f6\x2d4126\x2d84be\x2dc5e925a17833-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:48:01.381830 systemd[1]: var-lib-kubelet-pods-baad6806\x2dd9f6\x2d4126\x2d84be\x2dc5e925a17833-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 00:48:01.474606 kubelet[1912]: I0514 00:48:01.474559 1912 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474606 kubelet[1912]: I0514 00:48:01.474598 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474606 kubelet[1912]: I0514 00:48:01.474608 1912 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474606 kubelet[1912]: I0514 00:48:01.474617 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474627 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baad6806-d9f6-4126-84be-c5e925a17833-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474635 1912 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baad6806-d9f6-4126-84be-c5e925a17833-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474644 1912 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w5mcz\" (UniqueName: \"kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-kube-api-access-w5mcz\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474652 1912 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474660 1912 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474666 1912 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474673 1912 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474680 1912 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474688 1912 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 
kubelet[1912]: I0514 00:48:01.474696 1912 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baad6806-d9f6-4126-84be-c5e925a17833-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:48:01.474840 kubelet[1912]: I0514 00:48:01.474705 1912 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baad6806-d9f6-4126-84be-c5e925a17833-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:48:02.101825 kubelet[1912]: E0514 00:48:02.101776 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:02.108076 systemd[1]: Removed slice kubepods-burstable-podbaad6806_d9f6_4126_84be_c5e925a17833.slice. May 14 00:48:02.280543 kubelet[1912]: I0514 00:48:02.280513 1912 scope.go:117] "RemoveContainer" containerID="9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d" May 14 00:48:02.281456 env[1213]: time="2025-05-14T00:48:02.281407116Z" level=info msg="RemoveContainer for \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\"" May 14 00:48:02.285668 env[1213]: time="2025-05-14T00:48:02.285625236Z" level=info msg="RemoveContainer for \"9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d\" returns successfully" May 14 00:48:02.319295 kubelet[1912]: E0514 00:48:02.319261 1912 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="baad6806-d9f6-4126-84be-c5e925a17833" containerName="mount-cgroup" May 14 00:48:02.319493 kubelet[1912]: I0514 00:48:02.319478 1912 memory_manager.go:354] "RemoveStaleState removing state" podUID="baad6806-d9f6-4126-84be-c5e925a17833" containerName="mount-cgroup" May 14 00:48:02.324307 systemd[1]: Created slice kubepods-burstable-podd2297aa5_f6a6_4315_8708_8d23eb8a9c63.slice. 
May 14 00:48:02.379089 kubelet[1912]: I0514 00:48:02.378989 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-cilium-run\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.379666 kubelet[1912]: I0514 00:48:02.379641 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-xtables-lock\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.379772 kubelet[1912]: I0514 00:48:02.379758 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-hostproc\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.379843 kubelet[1912]: I0514 00:48:02.379831 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-lib-modules\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.379910 kubelet[1912]: I0514 00:48:02.379898 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-host-proc-sys-net\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.379976 kubelet[1912]: I0514 00:48:02.379965 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-etc-cni-netd\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380049 kubelet[1912]: I0514 00:48:02.380037 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-host-proc-sys-kernel\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380118 kubelet[1912]: I0514 00:48:02.380107 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-cilium-cgroup\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380189 kubelet[1912]: I0514 00:48:02.380176 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-cilium-config-path\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380365 kubelet[1912]: I0514 00:48:02.380245 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-cilium-ipsec-secrets\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380473 kubelet[1912]: I0514 00:48:02.380447 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kb9z\" (UniqueName: \"kubernetes.io/projected/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-kube-api-access-9kb9z\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380731 kubelet[1912]: I0514 00:48:02.380671 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-cni-path\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.380917 kubelet[1912]: I0514 00:48:02.380894 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-clustermesh-secrets\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.381098 kubelet[1912]: I0514 00:48:02.381034 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-bpf-maps\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.381286 kubelet[1912]: I0514 00:48:02.381227 1912 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2297aa5-f6a6-4315-8708-8d23eb8a9c63-hubble-tls\") pod \"cilium-5l6l7\" (UID: \"d2297aa5-f6a6-4315-8708-8d23eb8a9c63\") " pod="kube-system/cilium-5l6l7" May 14 00:48:02.626349 kubelet[1912]: E0514 00:48:02.626316 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:02.627032 env[1213]: time="2025-05-14T00:48:02.626969343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5l6l7,Uid:d2297aa5-f6a6-4315-8708-8d23eb8a9c63,Namespace:kube-system,Attempt:0,}" May 14 00:48:02.640529 env[1213]: time="2025-05-14T00:48:02.640397698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:48:02.640529 env[1213]: time="2025-05-14T00:48:02.640436498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:48:02.640529 env[1213]: time="2025-05-14T00:48:02.640451058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:48:02.642181 env[1213]: time="2025-05-14T00:48:02.641025733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc pid=3856 runtime=io.containerd.runc.v2 May 14 00:48:02.659503 systemd[1]: Started cri-containerd-6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc.scope. 
May 14 00:48:02.689078 env[1213]: time="2025-05-14T00:48:02.689033086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5l6l7,Uid:d2297aa5-f6a6-4315-8708-8d23eb8a9c63,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\"" May 14 00:48:02.690902 kubelet[1912]: E0514 00:48:02.689806 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:02.694534 env[1213]: time="2025-05-14T00:48:02.694494395Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:48:02.705492 env[1213]: time="2025-05-14T00:48:02.705430174Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e\"" May 14 00:48:02.706328 env[1213]: time="2025-05-14T00:48:02.706299286Z" level=info msg="StartContainer for \"80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e\"" May 14 00:48:02.720506 systemd[1]: Started cri-containerd-80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e.scope. May 14 00:48:02.760440 env[1213]: time="2025-05-14T00:48:02.760396143Z" level=info msg="StartContainer for \"80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e\" returns successfully" May 14 00:48:02.766587 systemd[1]: cri-containerd-80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e.scope: Deactivated successfully. 
May 14 00:48:02.788086 env[1213]: time="2025-05-14T00:48:02.788025686Z" level=info msg="shim disconnected" id=80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e May 14 00:48:02.788086 env[1213]: time="2025-05-14T00:48:02.788073726Z" level=warning msg="cleaning up after shim disconnected" id=80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e namespace=k8s.io May 14 00:48:02.788086 env[1213]: time="2025-05-14T00:48:02.788083925Z" level=info msg="cleaning up dead shim" May 14 00:48:02.794226 env[1213]: time="2025-05-14T00:48:02.794184749Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:48:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\n" May 14 00:48:03.284303 kubelet[1912]: E0514 00:48:03.284263 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:03.286805 env[1213]: time="2025-05-14T00:48:03.286084214Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:48:03.306879 env[1213]: time="2025-05-14T00:48:03.306822701Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3\"" May 14 00:48:03.307576 env[1213]: time="2025-05-14T00:48:03.307546455Z" level=info msg="StartContainer for \"c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3\"" May 14 00:48:03.320879 systemd[1]: Started cri-containerd-c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3.scope. May 14 00:48:03.353565 env[1213]: time="2025-05-14T00:48:03.353519547Z" level=info msg="StartContainer for \"c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3\" returns successfully" May 14 00:48:03.357723 systemd[1]: cri-containerd-c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3.scope: Deactivated successfully. 
May 14 00:48:03.388241 env[1213]: time="2025-05-14T00:48:03.388162425Z" level=info msg="shim disconnected" id=c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3 May 14 00:48:03.388241 env[1213]: time="2025-05-14T00:48:03.388215864Z" level=warning msg="cleaning up after shim disconnected" id=c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3 namespace=k8s.io May 14 00:48:03.388241 env[1213]: time="2025-05-14T00:48:03.388225784Z" level=info msg="cleaning up dead shim" May 14 00:48:03.394349 env[1213]: time="2025-05-14T00:48:03.394308208Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:48:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4001 runtime=io.containerd.runc.v2\n" May 14 00:48:03.667909 kubelet[1912]: W0514 00:48:03.667508 1912 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaad6806_d9f6_4126_84be_c5e925a17833.slice/cri-containerd-9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d.scope WatchSource:0}: container "9790429799095e6fad3ee000506f8c4b5a481e6456b2a0032b86c0aa0e7ebe2d" in namespace "k8s.io": not found May 14 00:48:04.103694 kubelet[1912]: I0514 00:48:04.103641 1912 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baad6806-d9f6-4126-84be-c5e925a17833" path="/var/lib/kubelet/pods/baad6806-d9f6-4126-84be-c5e925a17833/volumes" May 14 00:48:04.289302 kubelet[1912]: E0514 00:48:04.288748 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:04.291137 env[1213]: time="2025-05-14T00:48:04.291085344Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:48:04.306706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130437671.mount: Deactivated successfully. May 14 00:48:04.313237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397996323.mount: Deactivated successfully. May 14 00:48:04.317330 env[1213]: time="2025-05-14T00:48:04.317274980Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee\"" May 14 00:48:04.318064 env[1213]: time="2025-05-14T00:48:04.318033733Z" level=info msg="StartContainer for \"b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee\"" May 14 00:48:04.333396 systemd[1]: Started cri-containerd-b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee.scope. May 14 00:48:04.370014 systemd[1]: cri-containerd-b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee.scope: Deactivated successfully. 
May 14 00:48:04.386139 env[1213]: time="2025-05-14T00:48:04.386087260Z" level=info msg="StartContainer for \"b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee\" returns successfully" May 14 00:48:04.407792 env[1213]: time="2025-05-14T00:48:04.407730658Z" level=info msg="shim disconnected" id=b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee May 14 00:48:04.407792 env[1213]: time="2025-05-14T00:48:04.407778338Z" level=warning msg="cleaning up after shim disconnected" id=b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee namespace=k8s.io May 14 00:48:04.407792 env[1213]: time="2025-05-14T00:48:04.407788018Z" level=info msg="cleaning up dead shim" May 14 00:48:04.414310 env[1213]: time="2025-05-14T00:48:04.414268637Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:48:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4058 runtime=io.containerd.runc.v2\n" May 14 00:48:05.291582 kubelet[1912]: E0514 00:48:05.291535 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:05.293662 env[1213]: time="2025-05-14T00:48:05.293622051Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:48:05.305676 env[1213]: time="2025-05-14T00:48:05.305616819Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a\"" May 14 00:48:05.306139 env[1213]: time="2025-05-14T00:48:05.306110055Z" level=info msg="StartContainer for \"4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a\"" May 14 00:48:05.323033 systemd[1]: Started cri-containerd-4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a.scope. May 14 00:48:05.347055 systemd[1]: cri-containerd-4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a.scope: Deactivated successfully. 
May 14 00:48:05.348231 env[1213]: time="2025-05-14T00:48:05.347899425Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2297aa5_f6a6_4315_8708_8d23eb8a9c63.slice/cri-containerd-4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a.scope/memory.events\": no such file or directory" May 14 00:48:05.349878 env[1213]: time="2025-05-14T00:48:05.349838407Z" level=info msg="StartContainer for \"4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a\" returns successfully" May 14 00:48:05.369020 env[1213]: time="2025-05-14T00:48:05.368968949Z" level=info msg="shim disconnected" id=4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a May 14 00:48:05.369020 env[1213]: time="2025-05-14T00:48:05.369015789Z" level=warning msg="cleaning up after shim disconnected" id=4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a namespace=k8s.io May 14 00:48:05.369020 env[1213]: time="2025-05-14T00:48:05.369025709Z" level=info msg="cleaning up dead shim" May 14 00:48:05.375678 env[1213]: time="2025-05-14T00:48:05.375642567Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:48:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4111 runtime=io.containerd.runc.v2\n" May 14 00:48:05.487931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a-rootfs.mount: Deactivated successfully. May 14 00:48:06.148760 kubelet[1912]: E0514 00:48:06.148725 1912 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:48:06.296313 kubelet[1912]: E0514 00:48:06.296270 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:06.298824 env[1213]: time="2025-05-14T00:48:06.297916976Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:48:06.323259 env[1213]: time="2025-05-14T00:48:06.323196420Z" level=info msg="CreateContainer within sandbox \"6f526728fa4a974aefca5986e7c5f8987590c3fd5d5f69a677bac133194b73fc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2\"" May 14 00:48:06.323923 env[1213]: time="2025-05-14T00:48:06.323883454Z" level=info msg="StartContainer for \"34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2\"" May 14 00:48:06.343955 systemd[1]: Started cri-containerd-34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2.scope. 
May 14 00:48:06.384279 env[1213]: time="2025-05-14T00:48:06.384201332Z" level=info msg="StartContainer for \"34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2\" returns successfully" May 14 00:48:06.674372 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 14 00:48:06.777777 kubelet[1912]: W0514 00:48:06.775339 1912 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2297aa5_f6a6_4315_8708_8d23eb8a9c63.slice/cri-containerd-80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e.scope WatchSource:0}: task 80522a911dd02cb50e369f8407663e4f190da0a84be75dd1e0d3ef929068218e not found: not found May 14 00:48:07.303662 kubelet[1912]: E0514 00:48:07.303628 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:07.322220 kubelet[1912]: I0514 00:48:07.322160 1912 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5l6l7" podStartSLOduration=5.322142589 podStartE2EDuration="5.322142589s" podCreationTimestamp="2025-05-14 00:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:48:07.321851512 +0000 UTC m=+81.304515630" watchObservedRunningTime="2025-05-14 00:48:07.322142589 +0000 UTC m=+81.304806707" May 14 00:48:07.486822 kubelet[1912]: I0514 00:48:07.486745 1912 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:48:07Z","lastTransitionTime":"2025-05-14T00:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 00:48:08.628080 kubelet[1912]: E0514 00:48:08.628043 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:08.747391 systemd[1]: run-containerd-runc-k8s.io-34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2-runc.lVmUzR.mount: Deactivated successfully. 
May 14 00:48:09.535030 systemd-networkd[1037]: lxc_health: Link UP May 14 00:48:09.544660 systemd-networkd[1037]: lxc_health: Gained carrier May 14 00:48:09.545281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:48:09.882784 kubelet[1912]: W0514 00:48:09.882668 1912 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2297aa5_f6a6_4315_8708_8d23eb8a9c63.slice/cri-containerd-c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3.scope WatchSource:0}: task c115d195677cc0b2c518747791565f2d5106c46bf97fca5fc0254baf63d968a3 not found: not found May 14 00:48:10.101699 kubelet[1912]: E0514 00:48:10.101645 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:10.628748 kubelet[1912]: E0514 00:48:10.628709 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:10.698412 systemd-networkd[1037]: lxc_health: Gained IPv6LL May 14 00:48:10.875860 systemd[1]: run-containerd-runc-k8s.io-34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2-runc.0bb1sm.mount: Deactivated successfully. May 14 00:48:11.310063 kubelet[1912]: E0514 00:48:11.310023 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:12.311541 kubelet[1912]: E0514 00:48:12.311498 1912 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:48:12.989997 kubelet[1912]: W0514 00:48:12.989952 1912 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2297aa5_f6a6_4315_8708_8d23eb8a9c63.slice/cri-containerd-b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee.scope WatchSource:0}: task b39c2421ace2eec81f8b46602976dc73af421f94ab6d981e84882d1d01c1d5ee not found: not found May 14 00:48:15.142150 systemd[1]: run-containerd-runc-k8s.io-34b246a19fee770942e3caddf3a984c0f92a0af26913d6f5f94718a3a796b6c2-runc.dtBULG.mount: Deactivated successfully. May 14 00:48:15.208702 sshd[3722]: pam_unix(sshd:session): session closed for user core May 14 00:48:15.211039 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:38966.service: Deactivated successfully. May 14 00:48:15.211773 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:48:15.212468 systemd-logind[1202]: Session 24 logged out. Waiting for processes to exit. May 14 00:48:15.213327 systemd-logind[1202]: Removed session 24. May 14 00:48:16.096562 kubelet[1912]: W0514 00:48:16.096510 1912 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2297aa5_f6a6_4315_8708_8d23eb8a9c63.slice/cri-containerd-4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a.scope WatchSource:0}: task 4a07ba3e2e1b6e7e10ba9272ac37da745d8a8ef4529312d06e36baabf8eb8b5a not found: not found